Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-03-29 02:30
Elapsed: 2h0m
Revision: release-0.2
resultstore: https://source.cloud.google.com/results/invocations/d10e88f3-6f16-4b77-b85b-8d575026e41d/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 125 lines ...
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Invocation ID: ac344045-7c9d-4650-b3a8-58a3679d6fe5
Loading: 
Loading: 0 packages loaded
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_go/releases/download/v0.22.2/rules_go-v0.22.2.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/kubernetes/repo-infra/archive/v0.0.3.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading: 0 packages loaded
Loading: 0 packages loaded
    currently loading: test/e2e ... (3 packages)
Analyzing: 3 targets (3 packages loaded, 0 targets configured)
Analyzing: 3 targets (16 packages loaded, 9 targets configured)
Analyzing: 3 targets (16 packages loaded, 9 targets configured)
... skipping 1683 lines ...
    ubuntu-1804:
    ubuntu-1804: TASK [sysprep : Truncate shell history] ****************************************
    ubuntu-1804: ok: [default] => (item={u'path': u'/root/.bash_history'})
    ubuntu-1804: ok: [default] => (item={u'path': u'/home/ubuntu/.bash_history'})
    ubuntu-1804:
    ubuntu-1804: PLAY RECAP *********************************************************************
    ubuntu-1804: default                    : ok=60   changed=46   unreachable=0    failed=0    skipped=72   rescued=0    ignored=0
    ubuntu-1804:
==> ubuntu-1804: Deleting instance...
    ubuntu-1804: Instance has been deleted!
==> ubuntu-1804: Creating image...
==> ubuntu-1804: Deleting disk...
    ubuntu-1804: Disk has been deleted!
... skipping 409 lines ...
node/test1-controlplane-2.c.k8s-e2e-gci-gce-alpha1-5.internal condition met
node/test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal condition met
node/test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal condition met
Conformance test: not doing test setup.
I0329 03:01:48.765867   24871 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0329 03:01:48.766822   24871 e2e.go:124] Starting e2e run "7b25233e-14d7-4c6a-9869-f5fa4150e456" on Ginkgo node 1
{"msg":"Test Suite starting","total":283,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585450907 - Will randomize all specs
Will run 283 of 4993 specs

Mar 29 03:01:48.785: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 29 03:01:48.796: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 29 03:01:48.952: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 29 03:01:49.105: INFO: The status of Pod calico-node-ffn9v is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 29 03:01:49.105: INFO: 21 / 22 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 29 03:01:49.105: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 29 03:01:49.105: INFO: POD                NODE                                                      PHASE    GRACE  CONDITIONS
Mar 29 03:01:49.105: INFO: calico-node-ffn9v  test1-controlplane-2.c.k8s-e2e-gci-gce-alpha1-5.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 03:01:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 03:01:25 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 03:01:25 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 03:01:25 +0000 UTC  }]
Mar 29 03:01:49.105: INFO: 
Mar 29 03:01:51.258: INFO: The status of Pod calico-node-ffn9v is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Mar 29 03:01:51.258: INFO: 21 / 22 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Mar 29 03:01:51.258: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Mar 29 03:01:51.258: INFO: POD                NODE                                                      PHASE    GRACE  CONDITIONS
Mar 29 03:01:51.258: INFO: calico-node-ffn9v  test1-controlplane-2.c.k8s-e2e-gci-gce-alpha1-5.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 03:01:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 03:01:25 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 03:01:25 +0000 UTC ContainersNotReady containers with unready status: [calico-node]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 03:01:25 +0000 UTC  }]
Mar 29 03:01:51.258: INFO: 
Mar 29 03:01:53.259: INFO: 22 / 22 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
... skipping 19 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test hostPath mode
Mar 29 03:01:53.574: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8787" to be "Succeeded or Failed"
Mar 29 03:01:53.605: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 30.306129ms
Mar 29 03:01:55.636: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061900267s
Mar 29 03:01:57.667: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092874661s
STEP: Saw pod success
Mar 29 03:01:57.667: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Mar 29 03:01:57.700: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Mar 29 03:01:57.796: INFO: Waiting for pod pod-host-path-test to disappear
Mar 29 03:01:57.828: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:175
Mar 29 03:01:57.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8787" for this suite.
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":1,"skipped":13,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 9 lines ...
Mar 29 03:01:58.117: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 29 03:01:58.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1491" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":283,"completed":2,"skipped":44,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 55 lines ...
Mar 29 03:02:17.205: INFO: stderr: ""
Mar 29 03:02:17.205: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 03:02:17.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4989" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":283,"completed":3,"skipped":57,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 27 lines ...
Mar 29 03:02:38.232: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 29 03:02:38.487: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 29 03:02:38.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2044" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":283,"completed":4,"skipped":72,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-5cf6
STEP: Creating a pod to test atomic-volume-subpath
Mar 29 03:02:38.818: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5cf6" in namespace "subpath-4970" to be "Succeeded or Failed"
Mar 29 03:02:38.850: INFO: Pod "pod-subpath-test-downwardapi-5cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 32.309736ms
Mar 29 03:02:40.883: INFO: Pod "pod-subpath-test-downwardapi-5cf6": Phase="Running", Reason="", readiness=true. Elapsed: 2.064686676s
Mar 29 03:02:42.913: INFO: Pod "pod-subpath-test-downwardapi-5cf6": Phase="Running", Reason="", readiness=true. Elapsed: 4.095370038s
Mar 29 03:02:44.945: INFO: Pod "pod-subpath-test-downwardapi-5cf6": Phase="Running", Reason="", readiness=true. Elapsed: 6.126540479s
Mar 29 03:02:46.975: INFO: Pod "pod-subpath-test-downwardapi-5cf6": Phase="Running", Reason="", readiness=true. Elapsed: 8.157075813s
Mar 29 03:02:49.007: INFO: Pod "pod-subpath-test-downwardapi-5cf6": Phase="Running", Reason="", readiness=true. Elapsed: 10.188614856s
Mar 29 03:02:51.037: INFO: Pod "pod-subpath-test-downwardapi-5cf6": Phase="Running", Reason="", readiness=true. Elapsed: 12.219067615s
Mar 29 03:02:53.068: INFO: Pod "pod-subpath-test-downwardapi-5cf6": Phase="Running", Reason="", readiness=true. Elapsed: 14.250149556s
Mar 29 03:02:55.105: INFO: Pod "pod-subpath-test-downwardapi-5cf6": Phase="Running", Reason="", readiness=true. Elapsed: 16.286545337s
Mar 29 03:02:57.136: INFO: Pod "pod-subpath-test-downwardapi-5cf6": Phase="Running", Reason="", readiness=true. Elapsed: 18.317706938s
Mar 29 03:02:59.166: INFO: Pod "pod-subpath-test-downwardapi-5cf6": Phase="Running", Reason="", readiness=true. Elapsed: 20.348077882s
Mar 29 03:03:01.198: INFO: Pod "pod-subpath-test-downwardapi-5cf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.379401813s
STEP: Saw pod success
Mar 29 03:03:01.198: INFO: Pod "pod-subpath-test-downwardapi-5cf6" satisfied condition "Succeeded or Failed"
Mar 29 03:03:01.229: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-subpath-test-downwardapi-5cf6 container test-container-subpath-downwardapi-5cf6: <nil>
STEP: delete the pod
Mar 29 03:03:01.344: INFO: Waiting for pod pod-subpath-test-downwardapi-5cf6 to disappear
Mar 29 03:03:01.375: INFO: Pod pod-subpath-test-downwardapi-5cf6 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5cf6
Mar 29 03:03:01.375: INFO: Deleting pod "pod-subpath-test-downwardapi-5cf6" in namespace "subpath-4970"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 29 03:03:01.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4970" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":283,"completed":5,"skipped":90,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 14 lines ...
Mar 29 03:03:05.581: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 03:03:18.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6127" for this suite.
STEP: Destroying namespace "webhook-6127-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":283,"completed":6,"skipped":118,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 29 03:03:37.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-62" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":283,"completed":7,"skipped":126,"failed":0}
S
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 29 03:03:37.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4296" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":283,"completed":8,"skipped":127,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
Mar 29 03:03:48.747: INFO: Unable to read jessie_udp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:48.781: INFO: Unable to read jessie_tcp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:48.813: INFO: Unable to read jessie_udp@dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:48.844: INFO: Unable to read jessie_tcp@dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:48.876: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:48.908: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:49.103: INFO: Lookups using dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9039 wheezy_tcp@dns-test-service.dns-9039 wheezy_udp@dns-test-service.dns-9039.svc wheezy_tcp@dns-test-service.dns-9039.svc wheezy_udp@_http._tcp.dns-test-service.dns-9039.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9039.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9039 jessie_tcp@dns-test-service.dns-9039 jessie_udp@dns-test-service.dns-9039.svc jessie_tcp@dns-test-service.dns-9039.svc jessie_udp@_http._tcp.dns-test-service.dns-9039.svc jessie_tcp@_http._tcp.dns-test-service.dns-9039.svc]

Mar 29 03:03:54.136: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:54.167: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:54.199: INFO: Unable to read wheezy_udp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:54.231: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:54.285: INFO: Unable to read wheezy_udp@dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
... skipping 5 lines ...
Mar 29 03:03:54.686: INFO: Unable to read jessie_udp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:54.718: INFO: Unable to read jessie_tcp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:54.750: INFO: Unable to read jessie_udp@dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:54.783: INFO: Unable to read jessie_tcp@dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:54.815: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:54.846: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:55.038: INFO: Lookups using dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9039 wheezy_tcp@dns-test-service.dns-9039 wheezy_udp@dns-test-service.dns-9039.svc wheezy_tcp@dns-test-service.dns-9039.svc wheezy_udp@_http._tcp.dns-test-service.dns-9039.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9039.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9039 jessie_tcp@dns-test-service.dns-9039 jessie_udp@dns-test-service.dns-9039.svc jessie_tcp@dns-test-service.dns-9039.svc jessie_udp@_http._tcp.dns-test-service.dns-9039.svc jessie_tcp@_http._tcp.dns-test-service.dns-9039.svc]

Mar 29 03:03:59.136: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:59.167: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:59.199: INFO: Unable to read wheezy_udp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:59.231: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:59.263: INFO: Unable to read wheezy_udp@dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
... skipping 5 lines ...
Mar 29 03:03:59.654: INFO: Unable to read jessie_udp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:59.687: INFO: Unable to read jessie_tcp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:59.718: INFO: Unable to read jessie_udp@dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:59.750: INFO: Unable to read jessie_tcp@dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:59.783: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:03:59.816: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:00.012: INFO: Lookups using dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9039 wheezy_tcp@dns-test-service.dns-9039 wheezy_udp@dns-test-service.dns-9039.svc wheezy_tcp@dns-test-service.dns-9039.svc wheezy_udp@_http._tcp.dns-test-service.dns-9039.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9039.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9039 jessie_tcp@dns-test-service.dns-9039 jessie_udp@dns-test-service.dns-9039.svc jessie_tcp@dns-test-service.dns-9039.svc jessie_udp@_http._tcp.dns-test-service.dns-9039.svc jessie_tcp@_http._tcp.dns-test-service.dns-9039.svc]

Mar 29 03:04:04.135: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:04.168: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:04.200: INFO: Unable to read wheezy_udp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:04.232: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:04.264: INFO: Unable to read wheezy_udp@dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
... skipping 5 lines ...
Mar 29 03:04:04.648: INFO: Unable to read jessie_udp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:04.681: INFO: Unable to read jessie_tcp@dns-test-service.dns-9039 from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:04.713: INFO: Unable to read jessie_udp@dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:04.746: INFO: Unable to read jessie_tcp@dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:04.777: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:04.809: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:05.006: INFO: Lookups using dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9039 wheezy_tcp@dns-test-service.dns-9039 wheezy_udp@dns-test-service.dns-9039.svc wheezy_tcp@dns-test-service.dns-9039.svc wheezy_udp@_http._tcp.dns-test-service.dns-9039.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9039.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9039 jessie_tcp@dns-test-service.dns-9039 jessie_udp@dns-test-service.dns-9039.svc jessie_tcp@dns-test-service.dns-9039.svc jessie_udp@_http._tcp.dns-test-service.dns-9039.svc jessie_tcp@_http._tcp.dns-test-service.dns-9039.svc]

Mar 29 03:04:09.173: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:09.339: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:09.371: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9039.svc from pod dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4: the server could not find the requested resource (get pods dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4)
Mar 29 03:04:10.022: INFO: Lookups using dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4 failed for: [wheezy_tcp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.dns-9039.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9039.svc]

Mar 29 03:04:15.041: INFO: DNS probes using dns-9039/dns-test-202b9e73-b4c8-45d7-8a7b-720bd3d9d4b4 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 29 03:04:15.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9039" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":283,"completed":9,"skipped":139,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
Mar 29 03:04:20.735: INFO: Deleting pod "var-expansion-ea0021c7-b904-4967-a79b-9f887c9e348a" in namespace "var-expansion-5441"
Mar 29 03:04:20.785: INFO: Wait up to 5m0s for pod "var-expansion-ea0021c7-b904-4967-a79b-9f887c9e348a" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 29 03:04:58.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5441" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":283,"completed":10,"skipped":181,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 29 03:04:58.940: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 29 03:04:59.114: INFO: Waiting up to 5m0s for pod "downward-api-c892ccc2-15bd-42d9-ba5c-aebe395852e4" in namespace "downward-api-6678" to be "Succeeded or Failed"
Mar 29 03:04:59.149: INFO: Pod "downward-api-c892ccc2-15bd-42d9-ba5c-aebe395852e4": Phase="Pending", Reason="", readiness=false. Elapsed: 35.228909ms
Mar 29 03:05:01.187: INFO: Pod "downward-api-c892ccc2-15bd-42d9-ba5c-aebe395852e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.072938703s
STEP: Saw pod success
Mar 29 03:05:01.187: INFO: Pod "downward-api-c892ccc2-15bd-42d9-ba5c-aebe395852e4" satisfied condition "Succeeded or Failed"
Mar 29 03:05:01.217: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downward-api-c892ccc2-15bd-42d9-ba5c-aebe395852e4 container dapi-container: <nil>
STEP: delete the pod
Mar 29 03:05:01.310: INFO: Waiting for pod downward-api-c892ccc2-15bd-42d9-ba5c-aebe395852e4 to disappear
Mar 29 03:05:01.342: INFO: Pod downward-api-c892ccc2-15bd-42d9-ba5c-aebe395852e4 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 29 03:05:01.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6678" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":283,"completed":11,"skipped":212,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 8 lines ...
Mar 29 03:05:01.752: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9464b4d8-0bba-4092-8744-087345061af7", Controller:(*bool)(0xc001adafc6), BlockOwnerDeletion:(*bool)(0xc001adafc7)}}
Mar 29 03:05:01.786: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"46e8d998-37c9-404c-9692-88981488b8d8", Controller:(*bool)(0xc001d63b96), BlockOwnerDeletion:(*bool)(0xc001d63b97)}}
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 29 03:05:06.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3166" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":283,"completed":12,"skipped":218,"failed":0}
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 29 03:05:09.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-150" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":283,"completed":13,"skipped":220,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 22 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 29 03:05:16.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8000" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":283,"completed":14,"skipped":248,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 29 03:05:19.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8098" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":15,"skipped":270,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 29 03:05:19.337: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 29 03:05:19.514: INFO: Waiting up to 5m0s for pod "downward-api-afe406ca-2ccb-472f-8266-8d893468ca8e" in namespace "downward-api-4757" to be "Succeeded or Failed"
Mar 29 03:05:19.549: INFO: Pod "downward-api-afe406ca-2ccb-472f-8266-8d893468ca8e": Phase="Pending", Reason="", readiness=false. Elapsed: 34.765757ms
Mar 29 03:05:21.579: INFO: Pod "downward-api-afe406ca-2ccb-472f-8266-8d893468ca8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065269566s
STEP: Saw pod success
Mar 29 03:05:21.579: INFO: Pod "downward-api-afe406ca-2ccb-472f-8266-8d893468ca8e" satisfied condition "Succeeded or Failed"
Mar 29 03:05:21.610: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downward-api-afe406ca-2ccb-472f-8266-8d893468ca8e container dapi-container: <nil>
STEP: delete the pod
Mar 29 03:05:21.692: INFO: Waiting for pod downward-api-afe406ca-2ccb-472f-8266-8d893468ca8e to disappear
Mar 29 03:05:21.722: INFO: Pod downward-api-afe406ca-2ccb-472f-8266-8d893468ca8e no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 29 03:05:21.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4757" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":283,"completed":16,"skipped":356,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 20 lines ...
Mar 29 03:05:44.088: INFO: The status of Pod test-webserver-679665f5-20b8-4602-88c8-6446579382e9 is Running (Ready = true)
Mar 29 03:05:44.118: INFO: Container started at 2020-03-29 03:05:22 +0000 UTC, pod became ready at 2020-03-29 03:05:42 +0000 UTC
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 29 03:05:44.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3582" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":283,"completed":17,"skipped":402,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 03:05:44.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5512f5de-e84d-4fc8-8e7c-0e1be05f851c" in namespace "projected-2050" to be "Succeeded or Failed"
Mar 29 03:05:44.419: INFO: Pod "downwardapi-volume-5512f5de-e84d-4fc8-8e7c-0e1be05f851c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.296612ms
Mar 29 03:05:46.455: INFO: Pod "downwardapi-volume-5512f5de-e84d-4fc8-8e7c-0e1be05f851c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.067749969s
STEP: Saw pod success
Mar 29 03:05:46.455: INFO: Pod "downwardapi-volume-5512f5de-e84d-4fc8-8e7c-0e1be05f851c" satisfied condition "Succeeded or Failed"
Mar 29 03:05:46.485: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-5512f5de-e84d-4fc8-8e7c-0e1be05f851c container client-container: <nil>
STEP: delete the pod
Mar 29 03:05:46.569: INFO: Waiting for pod downwardapi-volume-5512f5de-e84d-4fc8-8e7c-0e1be05f851c to disappear
Mar 29 03:05:46.600: INFO: Pod downwardapi-volume-5512f5de-e84d-4fc8-8e7c-0e1be05f851c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 29 03:05:46.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2050" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":18,"skipped":419,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-9dcc273c-a21e-4439-8ee9-1413460d0c03
STEP: Creating a pod to test consume configMaps
Mar 29 03:05:46.922: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0205c322-62e7-4562-ae7d-1b5aa0c8814e" in namespace "projected-9970" to be "Succeeded or Failed"
Mar 29 03:05:46.953: INFO: Pod "pod-projected-configmaps-0205c322-62e7-4562-ae7d-1b5aa0c8814e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.436705ms
Mar 29 03:05:48.984: INFO: Pod "pod-projected-configmaps-0205c322-62e7-4562-ae7d-1b5aa0c8814e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062004548s
STEP: Saw pod success
Mar 29 03:05:48.984: INFO: Pod "pod-projected-configmaps-0205c322-62e7-4562-ae7d-1b5aa0c8814e" satisfied condition "Succeeded or Failed"
Mar 29 03:05:49.015: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-projected-configmaps-0205c322-62e7-4562-ae7d-1b5aa0c8814e container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 03:05:49.096: INFO: Waiting for pod pod-projected-configmaps-0205c322-62e7-4562-ae7d-1b5aa0c8814e to disappear
Mar 29 03:05:49.128: INFO: Pod pod-projected-configmaps-0205c322-62e7-4562-ae7d-1b5aa0c8814e no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 29 03:05:49.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9970" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":283,"completed":19,"skipped":446,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 28 lines ...
Mar 29 03:07:04.134: INFO: Terminating ReplicationController wrapped-volume-race-65ccb9e2-30f0-458b-afce-653274c7dab9 pods took: 400.213615ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Mar 29 03:07:20.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7324" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":283,"completed":20,"skipped":453,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 03:07:25.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6488" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":283,"completed":21,"skipped":457,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 29 03:07:26.241: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 29 03:07:26.420: INFO: Waiting up to 5m0s for pod "pod-729d36a7-08ff-4e51-8d92-dc7e1a10846f" in namespace "emptydir-1005" to be "Succeeded or Failed"
Mar 29 03:07:26.459: INFO: Pod "pod-729d36a7-08ff-4e51-8d92-dc7e1a10846f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.38296ms
Mar 29 03:07:28.490: INFO: Pod "pod-729d36a7-08ff-4e51-8d92-dc7e1a10846f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.069246714s
STEP: Saw pod success
Mar 29 03:07:28.490: INFO: Pod "pod-729d36a7-08ff-4e51-8d92-dc7e1a10846f" satisfied condition "Succeeded or Failed"
Mar 29 03:07:28.520: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-729d36a7-08ff-4e51-8d92-dc7e1a10846f container test-container: <nil>
STEP: delete the pod
Mar 29 03:07:28.616: INFO: Waiting for pod pod-729d36a7-08ff-4e51-8d92-dc7e1a10846f to disappear
Mar 29 03:07:28.647: INFO: Pod pod-729d36a7-08ff-4e51-8d92-dc7e1a10846f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 03:07:28.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1005" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":22,"skipped":470,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-2f5c63df-548f-45d2-a0b1-d718d7961222
STEP: Creating a pod to test consume secrets
Mar 29 03:07:28.936: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5a5289c8-f239-44d3-b2db-5acb122441d9" in namespace "projected-4780" to be "Succeeded or Failed"
Mar 29 03:07:28.966: INFO: Pod "pod-projected-secrets-5a5289c8-f239-44d3-b2db-5acb122441d9": Phase="Pending", Reason="", readiness=false. Elapsed: 29.624424ms
Mar 29 03:07:30.996: INFO: Pod "pod-projected-secrets-5a5289c8-f239-44d3-b2db-5acb122441d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060180343s
STEP: Saw pod success
Mar 29 03:07:30.997: INFO: Pod "pod-projected-secrets-5a5289c8-f239-44d3-b2db-5acb122441d9" satisfied condition "Succeeded or Failed"
Mar 29 03:07:31.026: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-projected-secrets-5a5289c8-f239-44d3-b2db-5acb122441d9 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 29 03:07:31.106: INFO: Waiting for pod pod-projected-secrets-5a5289c8-f239-44d3-b2db-5acb122441d9 to disappear
Mar 29 03:07:31.136: INFO: Pod pod-projected-secrets-5a5289c8-f239-44d3-b2db-5acb122441d9 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 29 03:07:31.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4780" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":23,"skipped":472,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Mar 29 03:07:31.602: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1724 /api/v1/namespaces/watch-1724/configmaps/e2e-watch-test-watch-closed 88285f92-4e32-4db5-8eac-ea9a312f0186 3879 0 2020-03-29 03:07:31 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 29 03:07:31.602: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1724 /api/v1/namespaces/watch-1724/configmaps/e2e-watch-test-watch-closed 88285f92-4e32-4db5-8eac-ea9a312f0186 3880 0 2020-03-29 03:07:31 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 29 03:07:31.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1724" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":283,"completed":24,"skipped":491,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 29 03:07:47.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6173" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":283,"completed":25,"skipped":514,"failed":0}
SS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 29 03:07:47.615: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-dd23a29e-48dc-4cce-8c13-24056d8903d7
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 29 03:07:47.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-798" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":283,"completed":26,"skipped":516,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 29 03:07:48.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8659" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":283,"completed":27,"skipped":554,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 29 03:08:04.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5795" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":283,"completed":28,"skipped":569,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 30 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 29 03:08:09.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6390" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":283,"completed":29,"skipped":570,"failed":0}
SS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 29 03:08:09.492: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 29 03:08:09.663: INFO: Waiting up to 5m0s for pod "downward-api-3c3d37cd-e7e4-44b7-81df-88c230a63b6e" in namespace "downward-api-6275" to be "Succeeded or Failed"
Mar 29 03:08:09.695: INFO: Pod "downward-api-3c3d37cd-e7e4-44b7-81df-88c230a63b6e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.920879ms
Mar 29 03:08:11.724: INFO: Pod "downward-api-3c3d37cd-e7e4-44b7-81df-88c230a63b6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061410752s
STEP: Saw pod success
Mar 29 03:08:11.724: INFO: Pod "downward-api-3c3d37cd-e7e4-44b7-81df-88c230a63b6e" satisfied condition "Succeeded or Failed"
Mar 29 03:08:11.753: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downward-api-3c3d37cd-e7e4-44b7-81df-88c230a63b6e container dapi-container: <nil>
STEP: delete the pod
Mar 29 03:08:11.831: INFO: Waiting for pod downward-api-3c3d37cd-e7e4-44b7-81df-88c230a63b6e to disappear
Mar 29 03:08:11.862: INFO: Pod downward-api-3c3d37cd-e7e4-44b7-81df-88c230a63b6e no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 29 03:08:11.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6275" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":283,"completed":30,"skipped":572,"failed":0}
SSSSSSSSSSSSSSSSS
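NOTE: The env vars under test come from resourceFieldRef with no limits set on the container, so the kubelet substitutes the node's allocatable values. A minimal sketch (names and image assumed, not taken from the run):

apiVersion: v1
kind: Pod
metadata:
  name: dapi-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.31
    command: ["sh", "-c", "env | grep _LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu     # falls back to node allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory  # falls back to node allocatable memory
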
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-0cb03b67-39be-4f7f-ad93-fc3ef297d456
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 29 03:08:16.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1701" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":31,"skipped":589,"failed":0}
SSS
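NOTE: The "optional" knob is what lets the pod above start before the ConfigMap exists and then observe the volume contents once it is created. Roughly (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo
spec:
  containers:
  - name: watcher
    image: busybox:1.31
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: opt
      mountPath: /etc/cm-volume
  volumes:
  - name: opt
    configMap:
      name: cm-test-opt-create   # may not exist yet
      optional: true             # pod starts anyway; files appear after creation
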
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Mar 29 03:08:20.503: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl exec --namespace=svcaccounts-9181 pod-service-account-6bb23d14-70b2-4a5c-9f83-d200453472a4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Mar 29 03:08:20.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9181" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":283,"completed":32,"skipped":592,"failed":0}
SSSSSSSSSSSSSSSSSSSS
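NOTE: The kubectl exec above reads one of the three files the kubelet projects for the pod's service account (token, ca.crt, namespace) under /var/run/secrets/kubernetes.io/serviceaccount. Any pod with automounting enabled gets them; a sketch (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: sa-token-demo
spec:
  serviceAccountName: default
  automountServiceAccountToken: true   # the default unless disabled on the ServiceAccount
  containers:
  - name: test
    image: busybox:1.31
    command: ["sh", "-c", "cat /var/run/secrets/kubernetes.io/serviceaccount/namespace"]
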
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 29 03:08:21.261: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f34831cf-7b60-4d95-8beb-0005c72f3c8d" in namespace "security-context-test-9415" to be "Succeeded or Failed"
Mar 29 03:08:21.291: INFO: Pod "busybox-readonly-false-f34831cf-7b60-4d95-8beb-0005c72f3c8d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.130678ms
Mar 29 03:08:23.323: INFO: Pod "busybox-readonly-false-f34831cf-7b60-4d95-8beb-0005c72f3c8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06216265s
Mar 29 03:08:23.323: INFO: Pod "busybox-readonly-false-f34831cf-7b60-4d95-8beb-0005c72f3c8d" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 29 03:08:23.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9415" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":283,"completed":33,"skipped":612,"failed":0}
SSSSSSSSSSSSSSSS
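NOTE: With readOnlyRootFilesystem left false, the container can write anywhere on its root filesystem, which is what the pod verifies before exiting 0. A minimal sketch (assumed names and image):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox:1.31
    command: ["sh", "-c", "echo ok > /tmp/probe && cat /tmp/probe"]
    securityContext:
      readOnlyRootFilesystem: false   # flipping this to true makes the write fail
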
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 29 03:08:23.558: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 03:08:24.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1323" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":283,"completed":34,"skipped":628,"failed":0}
SSSSS
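NOTE: "Simple CustomResourceDefinition" here covers create and delete of the definition object itself. A minimal apiextensions.k8s.io/v1 definition of the kind this suite typically registers (group and kind are illustrative):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com   # must be <plural>.<group>
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
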
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 26 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 29 03:08:32.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7747" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":283,"completed":35,"skipped":633,"failed":0}
SSSSSSSSSS
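NOTE: A "functioning" NodePort service means kube-proxy answers on the allocated port of every node and forwards to ready endpoints. The shape under test, roughly (selector and ports illustrative):

apiVersion: v1
kind: Service
metadata:
  name: nodeport-demo
spec:
  type: NodePort
  selector:
    app: agnhost
  ports:
  - port: 80          # ClusterIP port
    targetPort: 8080  # container port
    protocol: TCP
    # nodePort is allocated from the 30000-32767 range unless set explicitly
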
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Mar 29 03:08:36.521: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 29 03:08:36.521: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 03:08:36.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1014" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":283,"completed":36,"skipped":643,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 29 03:09:36.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1660" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":283,"completed":37,"skipped":677,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
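NOTE: Readiness failures, unlike liveness failures, never restart a container; the pod simply stays NotReady and out of service endpoints, which is why the test can assert the restart count stays 0. A sketch of such a pod (assumed image and command):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-demo
spec:
  containers:
  - name: probe
    image: busybox:1.31
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["false"]     # exits 1 every time, so Ready never becomes true
      initialDelaySeconds: 5
      periodSeconds: 5
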
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 35 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0329 03:09:47.591740   24871 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 29 03:09:47.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3651" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":283,"completed":38,"skipped":701,"failed":0}

------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Mar 29 03:09:50.536: INFO: Successfully updated pod "labelsupdate79ff33cb-89d6-460e-9a13-f88339a3ad2b"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 29 03:09:52.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6700" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":283,"completed":39,"skipped":701,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
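NOTE: The labels file in a downwardAPI volume is re-rendered by the kubelet after the pod's labels are patched, with no restart; that refresh is what the test observes after "Successfully updated pod". Roughly (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels   # re-rendered when labels change
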
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

W0329 03:09:59.114107   24871 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 29 03:09:59.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5690" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":283,"completed":40,"skipped":765,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 29 03:09:59.187: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 29 03:09:59.358: INFO: Waiting up to 5m0s for pod "pod-74b68278-1d1f-445b-9ce7-9081fa384d62" in namespace "emptydir-7742" to be "Succeeded or Failed"
Mar 29 03:09:59.390: INFO: Pod "pod-74b68278-1d1f-445b-9ce7-9081fa384d62": Phase="Pending", Reason="", readiness=false. Elapsed: 32.222887ms
Mar 29 03:10:01.421: INFO: Pod "pod-74b68278-1d1f-445b-9ce7-9081fa384d62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062901566s
STEP: Saw pod success
Mar 29 03:10:01.421: INFO: Pod "pod-74b68278-1d1f-445b-9ce7-9081fa384d62" satisfied condition "Succeeded or Failed"
Mar 29 03:10:01.451: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-74b68278-1d1f-445b-9ce7-9081fa384d62 container test-container: <nil>
STEP: delete the pod
Mar 29 03:10:01.540: INFO: Waiting for pod pod-74b68278-1d1f-445b-9ce7-9081fa384d62 to disappear
Mar 29 03:10:01.571: INFO: Pod pod-74b68278-1d1f-445b-9ce7-9081fa384d62 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 03:10:01.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7742" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":41,"skipped":782,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
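NOTE: "Default medium" is the node's backing disk (medium: Memory would be tmpfs); the test mounts the volume and checks the directory mode. A minimal sketch (image and command assumed; the suite uses its own mounttest image rather than busybox):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.31
    command: ["sh", "-c", "stat -c %a /test-volume"]   # expected mode 777 on the mount point
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir: {}   # default medium; add medium: Memory for tmpfs
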
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Mar 29 03:10:06.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9837" for this suite.
STEP: Destroying namespace "webhook-9837-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":283,"completed":42,"skipped":820,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
Mar 29 03:10:09.256: INFO: Terminating Job.batch foo pods took: 100.184133ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 29 03:10:47.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6338" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":283,"completed":43,"skipped":833,"failed":0}
SSSSSSSSSSSSSS
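NOTE: The "Job.batch foo" in the log is a short-lived Job whose deletion must cascade to its pods. A sketch of the shape (spec values illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2
  completions: 4
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox:1.31
        command: ["sh", "-c", "sleep 60"]

Deleting the Job with foreground or background propagation removes its pods as well, which is what the "Ensuring job was deleted" step verifies.
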
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 03:10:47.542: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01b0ec7b-f595-408f-86f4-4768705a9aac" in namespace "downward-api-2190" to be "Succeeded or Failed"
Mar 29 03:10:47.572: INFO: Pod "downwardapi-volume-01b0ec7b-f595-408f-86f4-4768705a9aac": Phase="Pending", Reason="", readiness=false. Elapsed: 29.49299ms
Mar 29 03:10:49.607: INFO: Pod "downwardapi-volume-01b0ec7b-f595-408f-86f4-4768705a9aac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064790544s
STEP: Saw pod success
Mar 29 03:10:49.607: INFO: Pod "downwardapi-volume-01b0ec7b-f595-408f-86f4-4768705a9aac" satisfied condition "Succeeded or Failed"
Mar 29 03:10:49.637: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-01b0ec7b-f595-408f-86f4-4768705a9aac container client-container: <nil>
STEP: delete the pod
Mar 29 03:10:49.718: INFO: Waiting for pod downwardapi-volume-01b0ec7b-f595-408f-86f4-4768705a9aac to disappear
Mar 29 03:10:49.749: INFO: Pod downwardapi-volume-01b0ec7b-f595-408f-86f4-4768705a9aac no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 29 03:10:49.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2190" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":283,"completed":44,"skipped":847,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 133 lines ...
Mar 29 03:11:17.316: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4793/pods","resourceVersion":"5808"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 29 03:11:17.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4793" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":283,"completed":45,"skipped":870,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
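NOTE: With updateStrategy RollingUpdate, changing the pod template (typically the image) makes the controller replace daemon pods node by node, bounded by maxUnavailable. The shape under test, roughly (names and image illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one node's daemon pod down at a time
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2   # bump this tag to trigger the rolling update
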
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Mar 29 03:11:17.636: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 29 03:11:20.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3703" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":283,"completed":46,"skipped":919,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
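NOTE: On a RestartNever pod, init containers still run sequentially to completion before the app container starts; the "PodSpec: initContainers in spec.initContainers" line above is the test dumping that ordering. A minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox:1.31
    command: ["true"]    # must exit 0 before init-2 starts
  - name: init-2
    image: busybox:1.31
    command: ["true"]
  containers:
  - name: main
    image: busybox:1.31
    command: ["sh", "-c", "echo done"]
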
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
Mar 29 03:12:11.522: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7706 /api/v1/namespaces/watch-7706/configmaps/e2e-watch-test-configmap-b 0a47efe1-7019-468f-bea7-565e22a32817 5992 0 2020-03-29 03:12:01 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 29 03:12:11.522: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7706 /api/v1/namespaces/watch-7706/configmaps/e2e-watch-test-configmap-b 0a47efe1-7019-468f-bea7-565e22a32817 5992 0 2020-03-29 03:12:01 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 29 03:12:21.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7706" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":283,"completed":47,"skipped":940,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Mar 29 03:12:26.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5367" for this suite.
STEP: Destroying namespace "webhook-5367-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":283,"completed":48,"skipped":942,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 105 lines ...
<a href="btmp">btmp</a>
<a href="ch... (200; 32.144857ms)
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 29 03:12:27.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3698" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":283,"completed":49,"skipped":953,"failed":0}

------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 29 03:12:29.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5774" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":283,"completed":50,"skipped":953,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 29 03:12:32.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-763" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":283,"completed":51,"skipped":972,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 29 03:12:37.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-354" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":283,"completed":52,"skipped":1032,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Mar 29 03:12:40.443: INFO: Successfully updated pod "labelsupdatee908b790-92b2-49d6-ae71-99d913d34c5c"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 29 03:12:44.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3804" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":283,"completed":53,"skipped":1033,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 24 lines ...
Mar 29 03:13:00.606: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 29 03:13:01.866: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 29 03:13:01.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-44" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":54,"skipped":1045,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Mar 29 03:13:02.318: INFO: stderr: ""
Mar 29 03:13:02.318: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://34.107.148.68:443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://34.107.148.68:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 03:13:02.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5600" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":283,"completed":55,"skipped":1061,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-e6bbfd3d-4d15-4de4-87ac-69045529286a
STEP: Creating a pod to test consume configMaps
Mar 29 03:13:02.608: INFO: Waiting up to 5m0s for pod "pod-configmaps-e2ad0f24-85a1-4ecf-9729-f2e8736611ff" in namespace "configmap-601" to be "Succeeded or Failed"
Mar 29 03:13:02.644: INFO: Pod "pod-configmaps-e2ad0f24-85a1-4ecf-9729-f2e8736611ff": Phase="Pending", Reason="", readiness=false. Elapsed: 35.585689ms
Mar 29 03:13:04.674: INFO: Pod "pod-configmaps-e2ad0f24-85a1-4ecf-9729-f2e8736611ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065833772s
STEP: Saw pod success
Mar 29 03:13:04.674: INFO: Pod "pod-configmaps-e2ad0f24-85a1-4ecf-9729-f2e8736611ff" satisfied condition "Succeeded or Failed"
Mar 29 03:13:04.704: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-configmaps-e2ad0f24-85a1-4ecf-9729-f2e8736611ff container configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 03:13:04.798: INFO: Waiting for pod pod-configmaps-e2ad0f24-85a1-4ecf-9729-f2e8736611ff to disappear
Mar 29 03:13:04.830: INFO: Pod pod-configmaps-e2ad0f24-85a1-4ecf-9729-f2e8736611ff no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 29 03:13:04.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-601" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":56,"skipped":1066,"failed":0}
SSSSSSSSS
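NOTE: defaultMode controls the permission bits of every file projected from the ConfigMap. A sketch (the suite checks content and mode via its mounttest image; busybox stat is shown here as an assumed stand-in):

apiVersion: v1
kind: Pod
metadata:
  name: cm-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.31
    command: ["sh", "-c", "stat -c %a /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: configmap-test-volume
      defaultMode: 0400   # files appear read-only to the owner, mode 400
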
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 29 03:13:11.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9027" for this suite.
STEP: Destroying namespace "webhook-9027-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":283,"completed":57,"skipped":1075,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-f870fae7-14ac-4665-9832-b8caba751a1f
STEP: Creating a pod to test consume secrets
Mar 29 03:13:12.071: INFO: Waiting up to 5m0s for pod "pod-secrets-51ef4520-1acb-4945-a8fc-20aaf3327005" in namespace "secrets-324" to be "Succeeded or Failed"
Mar 29 03:13:12.101: INFO: Pod "pod-secrets-51ef4520-1acb-4945-a8fc-20aaf3327005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.14305ms
Mar 29 03:13:14.131: INFO: Pod "pod-secrets-51ef4520-1acb-4945-a8fc-20aaf3327005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06010199s
STEP: Saw pod success
Mar 29 03:13:14.132: INFO: Pod "pod-secrets-51ef4520-1acb-4945-a8fc-20aaf3327005" satisfied condition "Succeeded or Failed"
Mar 29 03:13:14.161: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-secrets-51ef4520-1acb-4945-a8fc-20aaf3327005 container secret-volume-test: <nil>
STEP: delete the pod
Mar 29 03:13:14.244: INFO: Waiting for pod pod-secrets-51ef4520-1acb-4945-a8fc-20aaf3327005 to disappear
Mar 29 03:13:14.275: INFO: Pod pod-secrets-51ef4520-1acb-4945-a8fc-20aaf3327005 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 29 03:13:14.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-324" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":283,"completed":58,"skipped":1080,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Mar 29 03:13:16.780: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:16.814: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:16.913: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:16.948: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:16.985: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:17.025: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:17.093: INFO: Lookups using dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6382.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6382.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local jessie_udp@dns-test-service-2.dns-6382.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6382.svc.cluster.local]

Mar 29 03:13:22.124: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:22.155: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:22.187: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:22.217: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:22.315: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:22.346: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:22.377: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:22.408: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:22.472: INFO: Lookups using dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6382.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6382.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local jessie_udp@dns-test-service-2.dns-6382.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6382.svc.cluster.local]

Mar 29 03:13:27.125: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:27.156: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:27.189: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:27.221: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:27.318: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:27.349: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:27.379: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:27.410: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:27.473: INFO: Lookups using dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6382.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6382.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local jessie_udp@dns-test-service-2.dns-6382.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6382.svc.cluster.local]

Mar 29 03:13:32.125: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:32.155: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:32.187: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:32.218: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:32.314: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:32.345: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:32.377: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:32.408: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:32.472: INFO: Lookups using dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6382.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6382.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local jessie_udp@dns-test-service-2.dns-6382.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6382.svc.cluster.local]

Mar 29 03:13:37.124: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:37.155: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:37.187: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:37.218: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:37.313: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:37.349: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:37.380: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:37.411: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:37.473: INFO: Lookups using dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6382.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6382.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local jessie_udp@dns-test-service-2.dns-6382.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6382.svc.cluster.local]

Mar 29 03:13:42.127: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:42.158: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:42.189: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:42.220: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:42.317: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:42.350: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:42.383: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:42.414: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6382.svc.cluster.local from pod dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7: the server could not find the requested resource (get pods dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7)
Mar 29 03:13:42.480: INFO: Lookups using dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6382.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6382.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local jessie_udp@dns-test-service-2.dns-6382.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6382.svc.cluster.local]

Mar 29 03:13:47.491: INFO: DNS probes using dns-6382/dns-test-daf541c2-7884-499c-a89b-a67529f0a2a7 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 29 03:13:47.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6382" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":283,"completed":59,"skipped":1089,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
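NOTE: The names that eventually resolve above (dns-querier-2.dns-test-service-2.dns-6382.svc.cluster.local) come from pairing a headless Service with pods that set hostname and subdomain; the earlier "could not find the requested resource" lines are the expected polling while the probe results converge. Roughly (labels and ports illustrative):

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None          # headless: DNS maps names straight to pod IPs
  selector:
    dns-demo: "true"
  ports:
  - name: http
    port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    dns-demo: "true"
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2   # gives <hostname>.<subdomain>.<ns>.svc.cluster.local
  containers:
  - name: querier
    image: busybox:1.31
    command: ["sh", "-c", "sleep 3600"]
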
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 29 03:13:49.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5722" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":60,"skipped":1129,"failed":0}
SSSSSSSSSSSSSSSSSS
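NOTE: hostAliases entries are written by the kubelet into the container's /etc/hosts, which the pod then prints. A minimal sketch (addresses and hostnames illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: cat-hosts
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/hosts"]
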
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 68 lines ...
Mar 29 03:14:17.429: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1790/pods","resourceVersion":"6902"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 29 03:14:17.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1790" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":283,"completed":61,"skipped":1147,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 29 03:14:20.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3175" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":283,"completed":62,"skipped":1174,"failed":0}
S
------------------------------
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 29 03:14:20.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8245" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":283,"completed":63,"skipped":1175,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 29 03:14:20.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4774" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":283,"completed":64,"skipped":1195,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 11 lines ...
Mar 29 03:14:22.101: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 29 03:14:22.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1567" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":283,"completed":65,"skipped":1204,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 29 03:14:32.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0329 03:14:32.576462   24871 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-5272" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":283,"completed":66,"skipped":1211,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
Mar 29 03:15:13.249: INFO: Waiting for statefulset status.replicas updated to 0
Mar 29 03:15:13.278: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 29 03:15:13.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4912" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":283,"completed":67,"skipped":1216,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
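NOTE: "Scale subresource" means reads and writes go through statefulsets/ss/scale rather than the full object; kubectl scale statefulset ss --replicas=3 takes the same path. The StatefulSet itself is the usual shape (names and image illustrative):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test      # headless service governing the set
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: web
        image: k8s.gcr.io/pause:3.2
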
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 03:15:13.684: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d81df40-2b53-426f-b5e2-e59676647901" in namespace "projected-8785" to be "Succeeded or Failed"
Mar 29 03:15:13.717: INFO: Pod "downwardapi-volume-0d81df40-2b53-426f-b5e2-e59676647901": Phase="Pending", Reason="", readiness=false. Elapsed: 33.10743ms
Mar 29 03:15:15.750: INFO: Pod "downwardapi-volume-0d81df40-2b53-426f-b5e2-e59676647901": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.066238943s
STEP: Saw pod success
Mar 29 03:15:15.750: INFO: Pod "downwardapi-volume-0d81df40-2b53-426f-b5e2-e59676647901" satisfied condition "Succeeded or Failed"
Mar 29 03:15:15.780: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-0d81df40-2b53-426f-b5e2-e59676647901 container client-container: <nil>
STEP: delete the pod
Mar 29 03:15:15.874: INFO: Waiting for pod downwardapi-volume-0d81df40-2b53-426f-b5e2-e59676647901 to disappear
Mar 29 03:15:15.904: INFO: Pod downwardapi-volume-0d81df40-2b53-426f-b5e2-e59676647901 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 29 03:15:15.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8785" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":283,"completed":68,"skipped":1241,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 03:15:16.164: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e19097ff-436c-42ac-88a0-45456fb5ab7c" in namespace "projected-6676" to be "Succeeded or Failed"
Mar 29 03:15:16.193: INFO: Pod "downwardapi-volume-e19097ff-436c-42ac-88a0-45456fb5ab7c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.62743ms
Mar 29 03:15:18.223: INFO: Pod "downwardapi-volume-e19097ff-436c-42ac-88a0-45456fb5ab7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059816702s
STEP: Saw pod success
Mar 29 03:15:18.223: INFO: Pod "downwardapi-volume-e19097ff-436c-42ac-88a0-45456fb5ab7c" satisfied condition "Succeeded or Failed"
Mar 29 03:15:18.254: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-e19097ff-436c-42ac-88a0-45456fb5ab7c container client-container: <nil>
STEP: delete the pod
Mar 29 03:15:18.336: INFO: Waiting for pod downwardapi-volume-e19097ff-436c-42ac-88a0-45456fb5ab7c to disappear
Mar 29 03:15:18.367: INFO: Pod downwardapi-volume-e19097ff-436c-42ac-88a0-45456fb5ab7c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 29 03:15:18.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6676" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":69,"skipped":1241,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 29 03:15:20.807: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 29 03:15:20.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5470" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":283,"completed":70,"skipped":1263,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-bcd7c8a1-e7bd-4f11-8c60-0d4330793b10
STEP: Creating a pod to test consume configMaps
Mar 29 03:15:21.175: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8aaf8373-a2c6-4686-9fde-8ee0537472e3" in namespace "projected-2846" to be "Succeeded or Failed"
Mar 29 03:15:21.208: INFO: Pod "pod-projected-configmaps-8aaf8373-a2c6-4686-9fde-8ee0537472e3": Phase="Pending", Reason="", readiness=false. Elapsed: 33.865559ms
Mar 29 03:15:23.239: INFO: Pod "pod-projected-configmaps-8aaf8373-a2c6-4686-9fde-8ee0537472e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064477481s
STEP: Saw pod success
Mar 29 03:15:23.239: INFO: Pod "pod-projected-configmaps-8aaf8373-a2c6-4686-9fde-8ee0537472e3" satisfied condition "Succeeded or Failed"
Mar 29 03:15:23.269: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-projected-configmaps-8aaf8373-a2c6-4686-9fde-8ee0537472e3 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 03:15:23.365: INFO: Waiting for pod pod-projected-configmaps-8aaf8373-a2c6-4686-9fde-8ee0537472e3 to disappear
Mar 29 03:15:23.396: INFO: Pod pod-projected-configmaps-8aaf8373-a2c6-4686-9fde-8ee0537472e3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 29 03:15:23.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2846" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":71,"skipped":1289,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 29 03:15:23.492: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Mar 29 03:15:23.666: INFO: Waiting up to 5m0s for pod "client-containers-89ce3dac-71b1-40e4-9f2f-e63e49539523" in namespace "containers-9701" to be "Succeeded or Failed"
Mar 29 03:15:23.701: INFO: Pod "client-containers-89ce3dac-71b1-40e4-9f2f-e63e49539523": Phase="Pending", Reason="", readiness=false. Elapsed: 34.993767ms
Mar 29 03:15:25.732: INFO: Pod "client-containers-89ce3dac-71b1-40e4-9f2f-e63e49539523": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065267994s
STEP: Saw pod success
Mar 29 03:15:25.732: INFO: Pod "client-containers-89ce3dac-71b1-40e4-9f2f-e63e49539523" satisfied condition "Succeeded or Failed"
Mar 29 03:15:25.761: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod client-containers-89ce3dac-71b1-40e4-9f2f-e63e49539523 container test-container: <nil>
STEP: delete the pod
Mar 29 03:15:25.844: INFO: Waiting for pod client-containers-89ce3dac-71b1-40e4-9f2f-e63e49539523 to disappear
Mar 29 03:15:25.874: INFO: Pod client-containers-89ce3dac-71b1-40e4-9f2f-e63e49539523 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 29 03:15:25.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9701" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":283,"completed":72,"skipped":1301,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
Mar 29 03:15:48.868: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 29 03:15:49.118: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 29 03:15:49.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9835" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":283,"completed":73,"skipped":1309,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 29 03:15:49.211: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 29 03:15:49.394: INFO: Waiting up to 5m0s for pod "pod-1914f914-bdb6-43b0-a57f-9df3f4f37c1b" in namespace "emptydir-1267" to be "Succeeded or Failed"
Mar 29 03:15:49.424: INFO: Pod "pod-1914f914-bdb6-43b0-a57f-9df3f4f37c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.384601ms
Mar 29 03:15:51.454: INFO: Pod "pod-1914f914-bdb6-43b0-a57f-9df3f4f37c1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.05979231s
STEP: Saw pod success
Mar 29 03:15:51.454: INFO: Pod "pod-1914f914-bdb6-43b0-a57f-9df3f4f37c1b" satisfied condition "Succeeded or Failed"
Mar 29 03:15:51.485: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-1914f914-bdb6-43b0-a57f-9df3f4f37c1b container test-container: <nil>
STEP: delete the pod
Mar 29 03:15:51.564: INFO: Waiting for pod pod-1914f914-bdb6-43b0-a57f-9df3f4f37c1b to disappear
Mar 29 03:15:51.595: INFO: Pod pod-1914f914-bdb6-43b0-a57f-9df3f4f37c1b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 03:15:51.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1267" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":74,"skipped":1340,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 20 lines ...
STEP: Waiting some time to make sure that toleration time passed.
Mar 29 03:18:07.370: INFO: Pod wasn't evicted. Test successful
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/framework/framework.go:175
Mar 29 03:18:07.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-3614" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":283,"completed":75,"skipped":1357,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 29 03:18:18.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3515" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":283,"completed":76,"skipped":1401,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-3c288671-1b37-4f8c-a627-aec516a693ba
STEP: Creating a pod to test consume configMaps
Mar 29 03:18:19.129: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aeb4dd92-3cae-4c88-b594-ac2e12b66433" in namespace "projected-925" to be "Succeeded or Failed"
Mar 29 03:18:19.163: INFO: Pod "pod-projected-configmaps-aeb4dd92-3cae-4c88-b594-ac2e12b66433": Phase="Pending", Reason="", readiness=false. Elapsed: 34.588631ms
Mar 29 03:18:21.193: INFO: Pod "pod-projected-configmaps-aeb4dd92-3cae-4c88-b594-ac2e12b66433": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064622936s
STEP: Saw pod success
Mar 29 03:18:21.194: INFO: Pod "pod-projected-configmaps-aeb4dd92-3cae-4c88-b594-ac2e12b66433" satisfied condition "Succeeded or Failed"
Mar 29 03:18:21.228: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-projected-configmaps-aeb4dd92-3cae-4c88-b594-ac2e12b66433 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 03:18:21.327: INFO: Waiting for pod pod-projected-configmaps-aeb4dd92-3cae-4c88-b594-ac2e12b66433 to disappear
Mar 29 03:18:21.356: INFO: Pod pod-projected-configmaps-aeb4dd92-3cae-4c88-b594-ac2e12b66433 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 29 03:18:21.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-925" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":77,"skipped":1412,"failed":0}
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 29 03:18:24.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3785" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":283,"completed":78,"skipped":1420,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
Mar 29 03:18:38.337: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 29 03:18:41.619: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 03:18:54.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2585" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":283,"completed":79,"skipped":1422,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-qcf2
STEP: Creating a pod to test atomic-volume-subpath
Mar 29 03:18:55.138: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qcf2" in namespace "subpath-9948" to be "Succeeded or Failed"
Mar 29 03:18:55.170: INFO: Pod "pod-subpath-test-configmap-qcf2": Phase="Pending", Reason="", readiness=false. Elapsed: 31.937088ms
Mar 29 03:18:57.200: INFO: Pod "pod-subpath-test-configmap-qcf2": Phase="Running", Reason="", readiness=true. Elapsed: 2.062119723s
Mar 29 03:18:59.230: INFO: Pod "pod-subpath-test-configmap-qcf2": Phase="Running", Reason="", readiness=true. Elapsed: 4.092155122s
Mar 29 03:19:01.261: INFO: Pod "pod-subpath-test-configmap-qcf2": Phase="Running", Reason="", readiness=true. Elapsed: 6.12298854s
Mar 29 03:19:03.291: INFO: Pod "pod-subpath-test-configmap-qcf2": Phase="Running", Reason="", readiness=true. Elapsed: 8.153169316s
Mar 29 03:19:05.330: INFO: Pod "pod-subpath-test-configmap-qcf2": Phase="Running", Reason="", readiness=true. Elapsed: 10.192058094s
Mar 29 03:19:07.366: INFO: Pod "pod-subpath-test-configmap-qcf2": Phase="Running", Reason="", readiness=true. Elapsed: 12.227685754s
Mar 29 03:19:09.396: INFO: Pod "pod-subpath-test-configmap-qcf2": Phase="Running", Reason="", readiness=true. Elapsed: 14.258168026s
Mar 29 03:19:11.426: INFO: Pod "pod-subpath-test-configmap-qcf2": Phase="Running", Reason="", readiness=true. Elapsed: 16.288475941s
Mar 29 03:19:13.457: INFO: Pod "pod-subpath-test-configmap-qcf2": Phase="Running", Reason="", readiness=true. Elapsed: 18.319378339s
Mar 29 03:19:15.488: INFO: Pod "pod-subpath-test-configmap-qcf2": Phase="Running", Reason="", readiness=true. Elapsed: 20.350024115s
Mar 29 03:19:17.518: INFO: Pod "pod-subpath-test-configmap-qcf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.380263698s
STEP: Saw pod success
Mar 29 03:19:17.518: INFO: Pod "pod-subpath-test-configmap-qcf2" satisfied condition "Succeeded or Failed"
Mar 29 03:19:17.548: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-subpath-test-configmap-qcf2 container test-container-subpath-configmap-qcf2: <nil>
STEP: delete the pod
Mar 29 03:19:17.641: INFO: Waiting for pod pod-subpath-test-configmap-qcf2 to disappear
Mar 29 03:19:17.672: INFO: Pod pod-subpath-test-configmap-qcf2 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qcf2
Mar 29 03:19:17.672: INFO: Deleting pod "pod-subpath-test-configmap-qcf2" in namespace "subpath-9948"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 29 03:19:17.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9948" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":283,"completed":80,"skipped":1432,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 03:19:17.968: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d60eeec-d002-4603-9ac8-61748b1a1f38" in namespace "downward-api-8747" to be "Succeeded or Failed"
Mar 29 03:19:18.009: INFO: Pod "downwardapi-volume-5d60eeec-d002-4603-9ac8-61748b1a1f38": Phase="Pending", Reason="", readiness=false. Elapsed: 41.124434ms
Mar 29 03:19:20.039: INFO: Pod "downwardapi-volume-5d60eeec-d002-4603-9ac8-61748b1a1f38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.071072836s
STEP: Saw pod success
Mar 29 03:19:20.039: INFO: Pod "downwardapi-volume-5d60eeec-d002-4603-9ac8-61748b1a1f38" satisfied condition "Succeeded or Failed"
Mar 29 03:19:20.069: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-5d60eeec-d002-4603-9ac8-61748b1a1f38 container client-container: <nil>
STEP: delete the pod
Mar 29 03:19:20.149: INFO: Waiting for pod downwardapi-volume-5d60eeec-d002-4603-9ac8-61748b1a1f38 to disappear
Mar 29 03:19:20.180: INFO: Pod downwardapi-volume-5d60eeec-d002-4603-9ac8-61748b1a1f38 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 29 03:19:20.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8747" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":81,"skipped":1444,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 03:19:40.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1533" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":283,"completed":82,"skipped":1447,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 8 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Mar 29 03:19:43.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3542" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":283,"completed":83,"skipped":1454,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Mar 29 03:19:46.052: INFO: Successfully updated pod "annotationupdate0f1a85ea-4c5f-4423-a0c8-8bdecd2d4116"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 29 03:19:50.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3189" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":283,"completed":84,"skipped":1454,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Mar 29 03:20:12.848: INFO: Restart count of pod container-probe-1262/liveness-d099a722-4b10-4ee7-815d-221ae058c5dc is now 1 (20.339637977s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 29 03:20:12.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1262" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":283,"completed":85,"skipped":1458,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 23 lines ...
Mar 29 03:20:27.468: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 29 03:20:27.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8651" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":283,"completed":86,"skipped":1463,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-511b638e-5ec2-46ac-8c24-b3e874e52a06
STEP: Creating a pod to test consume secrets
Mar 29 03:20:27.796: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5529825b-13e6-48e6-a35f-c5fa2353b0d9" in namespace "projected-6578" to be "Succeeded or Failed"
Mar 29 03:20:27.828: INFO: Pod "pod-projected-secrets-5529825b-13e6-48e6-a35f-c5fa2353b0d9": Phase="Pending", Reason="", readiness=false. Elapsed: 31.085479ms
Mar 29 03:20:29.858: INFO: Pod "pod-projected-secrets-5529825b-13e6-48e6-a35f-c5fa2353b0d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061310036s
STEP: Saw pod success
Mar 29 03:20:29.858: INFO: Pod "pod-projected-secrets-5529825b-13e6-48e6-a35f-c5fa2353b0d9" satisfied condition "Succeeded or Failed"
Mar 29 03:20:29.887: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-projected-secrets-5529825b-13e6-48e6-a35f-c5fa2353b0d9 container secret-volume-test: <nil>
STEP: delete the pod
Mar 29 03:20:29.983: INFO: Waiting for pod pod-projected-secrets-5529825b-13e6-48e6-a35f-c5fa2353b0d9 to disappear
Mar 29 03:20:30.014: INFO: Pod pod-projected-secrets-5529825b-13e6-48e6-a35f-c5fa2353b0d9 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 29 03:20:30.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6578" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":283,"completed":87,"skipped":1466,"failed":0}
S
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Mar 29 03:20:32.663: INFO: Pod "test-recreate-deployment-5f94c574ff-pg4jx" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-pg4jx test-recreate-deployment-5f94c574ff- deployment-8367 /api/v1/namespaces/deployment-8367/pods/test-recreate-deployment-5f94c574ff-pg4jx 5413e8cd-3d76-4320-ad17-52a250870fb8 8672 0 2020-03-29 03:20:32 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 2ac37feb-b8e7-4665-af60-b0ec6b686479 0xc002c479e7 0xc002c479e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xc7fq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xc7fq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xc7fq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:32 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.6,PodIP:,StartTime:2020-03-29 03:20:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 29 03:20:32.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8367" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":283,"completed":88,"skipped":1467,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 76 lines ...
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xbc2f webserver-deployment-595b5b9587- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-595b5b9587-xbc2f 4e1c2822-7caf-4dcb-91e0-e504e02fc695 9217 0 2020-03-29 03:20:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:192.168.15.73/32] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 858be8c1-5491-411b-ae68-b40d38f2b6b1 0xc004dfba80 0xc004dfba81}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 
03:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:,StartTime:2020-03-29 03:20:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.053: INFO: Pod "webserver-deployment-595b5b9587-xckcm" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xckcm webserver-deployment-595b5b9587- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-595b5b9587-xckcm 952dba0e-72bc-4c6e-ad5b-5ea8c21fbc6c 9211 0 2020-03-29 03:20:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:595b5b9587] map[cni.projectcalico.org/podIP:192.168.15.74/32] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 858be8c1-5491-411b-ae68-b40d38f2b6b1 0xc004dfbc80 0xc004dfbc81}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 
03:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:,StartTime:2020-03-29 03:20:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.053: INFO: Pod "webserver-deployment-c7997dcc8-2l2rg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2l2rg webserver-deployment-c7997dcc8- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-c7997dcc8-2l2rg 37ae4ce4-7572-4389-a3ac-0296d2b0622d 9246 0 2020-03-29 03:20:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.15.80/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 aec29089-9eaa-4091-aa4a-0c6e2a9d5b7d 0xc004dfbec0 0xc004dfbec1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.053: INFO: Pod "webserver-deployment-c7997dcc8-2p9dj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2p9dj webserver-deployment-c7997dcc8- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-c7997dcc8-2p9dj 529d2111-a88d-4c92-adaa-54f765a719ac 9017 0 2020-03-29 03:20:37 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.15.66/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 aec29089-9eaa-4091-aa4a-0c6e2a9d5b7d 0xc004c1e0a0 0xc004c1e0a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.15.66,StartTime:2020-03-29 03:20:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.15.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.053: INFO: Pod "webserver-deployment-c7997dcc8-2psdj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2psdj webserver-deployment-c7997dcc8- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-c7997dcc8-2psdj 7a7b786e-7c96-4e6f-ad68-33b7c3031861 9210 0 2020-03-29 03:20:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.234.27/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 aec29089-9eaa-4091-aa4a-0c6e2a9d5b7d 0xc004c1e370 0xc004c1e371}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.6,PodIP:,StartTime:2020-03-29 03:20:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.053: INFO: Pod "webserver-deployment-c7997dcc8-6jqx5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6jqx5 webserver-deployment-c7997dcc8- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-c7997dcc8-6jqx5 dfa51961-2659-4d18-bacb-33c221a49d0e 9165 0 2020-03-29 03:20:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.15.70/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 aec29089-9eaa-4091-aa4a-0c6e2a9d5b7d 0xc004c1e640 0xc004c1e641}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.053: INFO: Pod "webserver-deployment-c7997dcc8-7fb9r" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7fb9r webserver-deployment-c7997dcc8- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-c7997dcc8-7fb9r 08728b70-c69f-41ba-ade3-7a66e27c1867 9168 0 2020-03-29 03:20:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.234.25/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 aec29089-9eaa-4091-aa4a-0c6e2a9d5b7d 0xc004c1e7f0 0xc004c1e7f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.054: INFO: Pod "webserver-deployment-c7997dcc8-c46vz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c46vz webserver-deployment-c7997dcc8- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-c7997dcc8-c46vz 79afb2f2-ae60-41b1-8fc6-a58c7c4c5136 9177 0 2020-03-29 03:20:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.15.71/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 aec29089-9eaa-4091-aa4a-0c6e2a9d5b7d 0xc004c1e9d0 0xc004c1e9d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.054: INFO: Pod "webserver-deployment-c7997dcc8-c6qdp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c6qdp webserver-deployment-c7997dcc8- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-c7997dcc8-c6qdp 8ddba4ac-3aa4-4ed9-b7d1-d7b7fb716f70 9164 0 2020-03-29 03:20:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.234.22/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 aec29089-9eaa-4091-aa4a-0c6e2a9d5b7d 0xc004c1eb70 0xc004c1eb71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.6,PodIP:,StartTime:2020-03-29 03:20:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.054: INFO: Pod "webserver-deployment-c7997dcc8-cnbw7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cnbw7 webserver-deployment-c7997dcc8- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-c7997dcc8-cnbw7 f84e00e5-fccd-442d-ac59-a6158339f9ad 9201 0 2020-03-29 03:20:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.234.26/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 aec29089-9eaa-4091-aa4a-0c6e2a9d5b7d 0xc004c1ed20 0xc004c1ed21}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.054: INFO: Pod "webserver-deployment-c7997dcc8-f5t7z" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f5t7z webserver-deployment-c7997dcc8- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-c7997dcc8-f5t7z 95dbab56-9a7f-4ea8-a4ad-d0023d22b509 9021 0 2020-03-29 03:20:37 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.15.67/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 aec29089-9eaa-4091-aa4a-0c6e2a9d5b7d 0xc004c1eec0 0xc004c1eec1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.15.67,StartTime:2020-03-29 03:20:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.15.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.055: INFO: Pod "webserver-deployment-c7997dcc8-g76tw" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g76tw webserver-deployment-c7997dcc8- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-c7997dcc8-g76tw 9a619957-faf5-4473-a3e0-aede01418882 9032 0 2020-03-29 03:20:37 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.15.68/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 aec29089-9eaa-4091-aa4a-0c6e2a9d5b7d 0xc004c1f130 0xc004c1f131}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.15.68,StartTime:2020-03-29 03:20:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.15.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.055: INFO: Pod "webserver-deployment-c7997dcc8-gj7sj" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gj7sj webserver-deployment-c7997dcc8- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-c7997dcc8-gj7sj 5414a857-06bc-4248-97d1-44be420c573d 9030 0 2020-03-29 03:20:37 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.234.20/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 aec29089-9eaa-4091-aa4a-0c6e2a9d5b7d 0xc004c1f390 0xc004c1f391}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.6,PodIP:192.168.234.20,StartTime:2020-03-29 03:20:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.234.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.055: INFO: Pod "webserver-deployment-c7997dcc8-s6djs" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s6djs webserver-deployment-c7997dcc8- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-c7997dcc8-s6djs b1887cba-29f9-4535-a829-0b27ed703fa4 9252 0 2020-03-29 03:20:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.15.77/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 aec29089-9eaa-4091-aa4a-0c6e2a9d5b7d 0xc004c1f6a0 0xc004c1f6a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:,StartTime:2020-03-29 03:20:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 29 03:20:42.055: INFO: Pod "webserver-deployment-c7997dcc8-tq669" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tq669 webserver-deployment-c7997dcc8- deployment-6323 /api/v1/namespaces/deployment-6323/pods/webserver-deployment-c7997dcc8-tq669 03ff64ec-95fa-404b-919b-17b9e9311e3a 9026 0 2020-03-29 03:20:37 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.234.17/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 aec29089-9eaa-4091-aa4a-0c6e2a9d5b7d 0xc004c1f8c0 0xc004c1f8c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jpx68,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jpx68,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jpx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:20:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.6,PodIP:192.168.234.17,StartTime:2020-03-29 03:20:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.234.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 29 03:20:42.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6323" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":283,"completed":89,"skipped":1488,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
Mar 29 03:20:54.484: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  test/e2e/framework/framework.go:175
Mar 29 03:20:54.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1715" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":283,"completed":90,"skipped":1537,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-d57e10a8-f414-483f-8f18-62b7ea3c1f45
STEP: Creating secret with name secret-projected-all-test-volume-85828833-134e-4232-a59d-9a228b55c454
STEP: Creating a pod to test Check all projections for projected volume plugin
Mar 29 03:20:54.844: INFO: Waiting up to 5m0s for pod "projected-volume-1440fae4-0b37-4990-8168-10187bd88234" in namespace "projected-5688" to be "Succeeded or Failed"
Mar 29 03:20:54.881: INFO: Pod "projected-volume-1440fae4-0b37-4990-8168-10187bd88234": Phase="Pending", Reason="", readiness=false. Elapsed: 36.985927ms
Mar 29 03:20:56.911: INFO: Pod "projected-volume-1440fae4-0b37-4990-8168-10187bd88234": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.067710261s
STEP: Saw pod success
Mar 29 03:20:56.911: INFO: Pod "projected-volume-1440fae4-0b37-4990-8168-10187bd88234" satisfied condition "Succeeded or Failed"
Mar 29 03:20:56.942: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod projected-volume-1440fae4-0b37-4990-8168-10187bd88234 container projected-all-volume-test: <nil>
STEP: delete the pod
Mar 29 03:20:57.022: INFO: Waiting for pod projected-volume-1440fae4-0b37-4990-8168-10187bd88234 to disappear
Mar 29 03:20:57.054: INFO: Pod projected-volume-1440fae4-0b37-4990-8168-10187bd88234 no longer exists
[AfterEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:175
Mar 29 03:20:57.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5688" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":283,"completed":91,"skipped":1572,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  test/e2e/framework/framework.go:175
Mar 29 03:20:57.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-4870" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":283,"completed":92,"skipped":1580,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-4c87c228-a102-4960-bd17-182cd2247dd3
STEP: Creating a pod to test consume configMaps
Mar 29 03:20:57.964: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-49631c57-a9eb-43df-ba11-cda49bcb3a4c" in namespace "projected-7681" to be "Succeeded or Failed"
Mar 29 03:20:57.994: INFO: Pod "pod-projected-configmaps-49631c57-a9eb-43df-ba11-cda49bcb3a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.964301ms
Mar 29 03:21:00.024: INFO: Pod "pod-projected-configmaps-49631c57-a9eb-43df-ba11-cda49bcb3a4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060049962s
STEP: Saw pod success
Mar 29 03:21:00.024: INFO: Pod "pod-projected-configmaps-49631c57-a9eb-43df-ba11-cda49bcb3a4c" satisfied condition "Succeeded or Failed"
Mar 29 03:21:00.059: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-projected-configmaps-49631c57-a9eb-43df-ba11-cda49bcb3a4c container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 03:21:00.139: INFO: Waiting for pod pod-projected-configmaps-49631c57-a9eb-43df-ba11-cda49bcb3a4c to disappear
Mar 29 03:21:00.173: INFO: Pod pod-projected-configmaps-49631c57-a9eb-43df-ba11-cda49bcb3a4c no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 29 03:21:00.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7681" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":283,"completed":93,"skipped":1588,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 29 03:21:13.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6391" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":283,"completed":94,"skipped":1609,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 29 03:21:16.121: INFO: Initial restart count of pod test-webserver-f0d70801-fe85-494d-b4e5-49928fbca71a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 29 03:25:17.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1541" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":283,"completed":95,"skipped":1672,"failed":0}

------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Mar 29 03:25:22.325: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:22.357: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:22.580: INFO: Unable to read jessie_udp@dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:22.612: INFO: Unable to read jessie_tcp@dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:22.644: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:22.676: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:22.868: INFO: Lookups using dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a failed for: [wheezy_udp@dns-test-service.dns-67.svc.cluster.local wheezy_tcp@dns-test-service.dns-67.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-67.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-67.svc.cluster.local jessie_udp@dns-test-service.dns-67.svc.cluster.local jessie_tcp@dns-test-service.dns-67.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-67.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-67.svc.cluster.local]

Mar 29 03:25:27.902: INFO: Unable to read wheezy_udp@dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:27.932: INFO: Unable to read wheezy_tcp@dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:27.963: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:27.994: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:28.220: INFO: Unable to read jessie_udp@dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:28.251: INFO: Unable to read jessie_tcp@dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:28.282: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:28.313: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-67.svc.cluster.local from pod dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a: the server could not find the requested resource (get pods dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a)
Mar 29 03:25:28.503: INFO: Lookups using dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a failed for: [wheezy_udp@dns-test-service.dns-67.svc.cluster.local wheezy_tcp@dns-test-service.dns-67.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-67.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-67.svc.cluster.local jessie_udp@dns-test-service.dns-67.svc.cluster.local jessie_tcp@dns-test-service.dns-67.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-67.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-67.svc.cluster.local]

... skipping 39 lines ...

Mar 29 03:25:53.515: INFO: DNS probes using dns-67/dns-test-9a17ed37-3fda-489a-9d32-181c5f0e657a succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 29 03:25:53.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-67" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":283,"completed":96,"skipped":1672,"failed":0}
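The probes logged above are plain A-record and SRV-record lookups for the test service, retried from the wheezy and jessie utility pods until every name resolves. A minimal Go sketch of the same queries, assuming it runs inside a cluster where the dns-67/dns-test-service names from this run actually exist:

package main

import (
    "context"
    "fmt"
    "net"
    "time"
)

func main() {
    // Service name copied from the log above; only resolvable in-cluster.
    host := "dns-test-service.dns-67.svc.cluster.local"

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // A records for the ClusterIP service (the wheezy_udp@/wheezy_tcp@ checks).
    addrs, err := net.DefaultResolver.LookupHost(ctx, host)
    fmt.Println("A records:", addrs, err)

    // SRV records for the named port (the _http._tcp checks). The stock
    // resolver queries over UDP; a TCP-only probe needs a custom Dialer.
    _, srvs, err := net.DefaultResolver.LookupSRV(ctx, "http", "tcp", host)
    if err != nil {
        fmt.Println("SRV lookup error:", err)
        return
    }
    for _, s := range srvs {
        fmt.Printf("SRV record: %s:%d\n", s.Target, s.Port)
    }
}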
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 03:25:53.951: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b681050-9188-49a0-98f2-21baeb998e1b" in namespace "downward-api-9066" to be "Succeeded or Failed"
Mar 29 03:25:53.983: INFO: Pod "downwardapi-volume-7b681050-9188-49a0-98f2-21baeb998e1b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.315909ms
Mar 29 03:25:56.013: INFO: Pod "downwardapi-volume-7b681050-9188-49a0-98f2-21baeb998e1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061280735s
STEP: Saw pod success
Mar 29 03:25:56.013: INFO: Pod "downwardapi-volume-7b681050-9188-49a0-98f2-21baeb998e1b" satisfied condition "Succeeded or Failed"
Mar 29 03:25:56.043: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-7b681050-9188-49a0-98f2-21baeb998e1b container client-container: <nil>
STEP: delete the pod
Mar 29 03:25:56.135: INFO: Waiting for pod downwardapi-volume-7b681050-9188-49a0-98f2-21baeb998e1b to disappear
Mar 29 03:25:56.166: INFO: Pod downwardapi-volume-7b681050-9188-49a0-98f2-21baeb998e1b no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 29 03:25:56.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9066" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":283,"completed":97,"skipped":1674,"failed":0}
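The pod this test creates mounts a downwardAPI volume whose file is populated from the container's own memory request. A minimal sketch of such a pod spec using the k8s.io/api types (image and sizes are placeholders, not the suite's actual fixture):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox", // placeholder; the suite uses its own test image
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "memory_request", // file content becomes "33554432"
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "requests.memory",
                            },
                        }},
                    },
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}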
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 29 03:25:56.256: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
Mar 29 03:25:56.424: INFO: Waiting up to 5m0s for pod "client-containers-559cb8b4-60a6-407a-8328-91f5a9c87e52" in namespace "containers-1168" to be "Succeeded or Failed"
Mar 29 03:25:56.454: INFO: Pod "client-containers-559cb8b4-60a6-407a-8328-91f5a9c87e52": Phase="Pending", Reason="", readiness=false. Elapsed: 30.08535ms
Mar 29 03:25:58.485: INFO: Pod "client-containers-559cb8b4-60a6-407a-8328-91f5a9c87e52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061198471s
STEP: Saw pod success
Mar 29 03:25:58.485: INFO: Pod "client-containers-559cb8b4-60a6-407a-8328-91f5a9c87e52" satisfied condition "Succeeded or Failed"
Mar 29 03:25:58.515: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod client-containers-559cb8b4-60a6-407a-8328-91f5a9c87e52 container test-container: <nil>
STEP: delete the pod
Mar 29 03:25:58.607: INFO: Waiting for pod client-containers-559cb8b4-60a6-407a-8328-91f5a9c87e52 to disappear
Mar 29 03:25:58.637: INFO: Pod client-containers-559cb8b4-60a6-407a-8328-91f5a9c87e52 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 29 03:25:58.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1168" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":283,"completed":98,"skipped":1696,"failed":0}
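"Override all" above means the pod sets both command and args, replacing the image's ENTRYPOINT and CMD respectively. A minimal sketch (image and strings are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",                   // placeholder image
                Command: []string{"/bin/echo"},       // replaces the image ENTRYPOINT
                Args:    []string{"override", "all"}, // replaces the image CMD
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}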
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-6r59
STEP: Creating a pod to test atomic-volume-subpath
Mar 29 03:25:58.979: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6r59" in namespace "subpath-7772" to be "Succeeded or Failed"
Mar 29 03:25:59.013: INFO: Pod "pod-subpath-test-projected-6r59": Phase="Pending", Reason="", readiness=false. Elapsed: 33.828213ms
Mar 29 03:26:01.042: INFO: Pod "pod-subpath-test-projected-6r59": Phase="Running", Reason="", readiness=true. Elapsed: 2.063694984s
Mar 29 03:26:03.073: INFO: Pod "pod-subpath-test-projected-6r59": Phase="Running", Reason="", readiness=true. Elapsed: 4.094515756s
Mar 29 03:26:05.103: INFO: Pod "pod-subpath-test-projected-6r59": Phase="Running", Reason="", readiness=true. Elapsed: 6.12457595s
Mar 29 03:26:07.134: INFO: Pod "pod-subpath-test-projected-6r59": Phase="Running", Reason="", readiness=true. Elapsed: 8.15518503s
Mar 29 03:26:09.164: INFO: Pod "pod-subpath-test-projected-6r59": Phase="Running", Reason="", readiness=true. Elapsed: 10.18545797s
Mar 29 03:26:11.194: INFO: Pod "pod-subpath-test-projected-6r59": Phase="Running", Reason="", readiness=true. Elapsed: 12.215783452s
Mar 29 03:26:13.224: INFO: Pod "pod-subpath-test-projected-6r59": Phase="Running", Reason="", readiness=true. Elapsed: 14.24571166s
Mar 29 03:26:15.255: INFO: Pod "pod-subpath-test-projected-6r59": Phase="Running", Reason="", readiness=true. Elapsed: 16.276247911s
Mar 29 03:26:17.285: INFO: Pod "pod-subpath-test-projected-6r59": Phase="Running", Reason="", readiness=true. Elapsed: 18.306516871s
Mar 29 03:26:19.315: INFO: Pod "pod-subpath-test-projected-6r59": Phase="Running", Reason="", readiness=true. Elapsed: 20.336469408s
Mar 29 03:26:21.345: INFO: Pod "pod-subpath-test-projected-6r59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.366051353s
STEP: Saw pod success
Mar 29 03:26:21.345: INFO: Pod "pod-subpath-test-projected-6r59" satisfied condition "Succeeded or Failed"
Mar 29 03:26:21.375: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-subpath-test-projected-6r59 container test-container-subpath-projected-6r59: <nil>
STEP: delete the pod
Mar 29 03:26:21.455: INFO: Waiting for pod pod-subpath-test-projected-6r59 to disappear
Mar 29 03:26:21.486: INFO: Pod pod-subpath-test-projected-6r59 no longer exists
STEP: Deleting pod pod-subpath-test-projected-6r59
Mar 29 03:26:21.486: INFO: Deleting pod "pod-subpath-test-projected-6r59" in namespace "subpath-7772"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 29 03:26:21.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7772" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":283,"completed":99,"skipped":1720,"failed":0}
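pod-subpath-test-projected-6r59 mounts one entry of a projected volume via subPath and keeps reading it while the volume contents are updated atomically underneath. A sketch of the volume and mount wiring, with placeholder names rather than the test's generated ones:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "busybox", // placeholder
                Command: []string{"sh", "-c", "for i in $(seq 20); do cat /test-volume/content; sleep 1; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-vol",
                    MountPath: "/test-volume",
                    SubPath:   "subpath-dir", // mount only this path within the volume
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected-vol",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // placeholder
                            },
                        }},
                    },
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}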
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 124 lines ...
Mar 29 03:27:15.370: INFO: ss-1  test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 03:26:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 03:26:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 03:26:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 03:26:42 +0000 UTC  }]
Mar 29 03:27:15.370: INFO: 
Mar 29 03:27:15.370: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4484
Mar 29 03:27:16.402: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4484 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 29 03:27:16.735: INFO: rc: 1
Mar 29 03:27:16.735: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4484 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Mar 29 03:27:26.735: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4484 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 29 03:27:26.958: INFO: rc: 1
Mar 29 03:27:26.958: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4484 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
... skipping 280 lines ...
Mar 29 03:32:23.238: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4484 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 29 03:32:23.462: INFO: rc: 1
Mar 29 03:32:23.462: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Mar 29 03:32:23.462: INFO: Scaling statefulset ss to 0
Mar 29 03:32:23.556: INFO: Waiting for statefulset status.replicas updated to 0
... skipping 13 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:592
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":283,"completed":100,"skipped":1721,"failed":0}
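Burst scaling relies on the Parallel pod management policy: the controller creates and deletes pods all at once instead of one ordinal at a time, which is why ss-0 is already gone above while ss-1 is still terminating. A sketch of a StatefulSet configured that way (image and replica count are placeholders):

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    labels := map[string]string{"app": "ss"}
    ss := appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss"},
        Spec: appsv1.StatefulSetSpec{
            ServiceName:         "test",
            Replicas:            int32Ptr(3),
            PodManagementPolicy: appsv1.ParallelPodManagement, // burst: no ordered one-at-a-time scaling
            Selector:            &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{Name: "webserver", Image: "httpd:2.4"}}, // placeholder
                },
            },
        },
    }
    b, _ := json.MarshalIndent(ss, "", "  ")
    fmt.Println(string(b))
}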
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 29 03:32:32.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2167" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":283,"completed":101,"skipped":1723,"failed":0}
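The predicate under test only rejects a pod when hostIP, hostPort, and protocol all collide with an existing pod on the node. A sketch of two container port declarations that share hostPort 8080 yet can coexist (addresses are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Same hostPort, different hostIP and protocol: no scheduling conflict.
    a := corev1.ContainerPort{ContainerPort: 8080, HostPort: 8080, HostIP: "127.0.0.1", Protocol: corev1.ProtocolUDP}
    b := corev1.ContainerPort{ContainerPort: 8080, HostPort: 8080, HostIP: "0.0.0.0", Protocol: corev1.ProtocolTCP}

    // Simplified view of the check: the real scheduler also treats 0.0.0.0 as
    // a wildcard that collides with any hostIP on the same port and protocol,
    // but here the protocols differ, so both pods fit on one node.
    conflict := a.HostPort == b.HostPort && a.Protocol == b.Protocol && a.HostIP == b.HostIP
    fmt.Println("conflict:", conflict) // false
}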
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 29 03:32:34.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9246" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":283,"completed":102,"skipped":1739,"failed":0}
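The negative case here is a pod whose nodeSelector matches no node label: the scheduler leaves it Pending with a FailedScheduling event. A sketch, with a label value deliberately chosen so that no node carries it:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
        Spec: corev1.PodSpec{
            // No node advertises this label, so scheduling must fail.
            NodeSelector: map[string]string{"label": "no-such-value"},
            Containers:   []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}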
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 03:32:34.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b7f42af-a65d-4dbb-8dd7-168836e66d38" in namespace "downward-api-701" to be "Succeeded or Failed"
Mar 29 03:32:34.868: INFO: Pod "downwardapi-volume-4b7f42af-a65d-4dbb-8dd7-168836e66d38": Phase="Pending", Reason="", readiness=false. Elapsed: 29.396511ms
Mar 29 03:32:36.898: INFO: Pod "downwardapi-volume-4b7f42af-a65d-4dbb-8dd7-168836e66d38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059180966s
STEP: Saw pod success
Mar 29 03:32:36.898: INFO: Pod "downwardapi-volume-4b7f42af-a65d-4dbb-8dd7-168836e66d38" satisfied condition "Succeeded or Failed"
Mar 29 03:32:36.929: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-4b7f42af-a65d-4dbb-8dd7-168836e66d38 container client-container: <nil>
STEP: delete the pod
Mar 29 03:32:37.012: INFO: Waiting for pod downwardapi-volume-4b7f42af-a65d-4dbb-8dd7-168836e66d38 to disappear
Mar 29 03:32:37.044: INFO: Pod downwardapi-volume-4b7f42af-a65d-4dbb-8dd7-168836e66d38 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 29 03:32:37.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-701" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":103,"skipped":1772,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-9f62b19d-74c7-430c-abaa-b19f9997cac2
STEP: Creating a pod to test consume configMaps
Mar 29 03:32:37.348: INFO: Waiting up to 5m0s for pod "pod-configmaps-759fa568-0c72-4169-a3da-f93a533fb31b" in namespace "configmap-3194" to be "Succeeded or Failed"
Mar 29 03:32:37.381: INFO: Pod "pod-configmaps-759fa568-0c72-4169-a3da-f93a533fb31b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.151529ms
Mar 29 03:32:39.411: INFO: Pod "pod-configmaps-759fa568-0c72-4169-a3da-f93a533fb31b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062751279s
STEP: Saw pod success
Mar 29 03:32:39.411: INFO: Pod "pod-configmaps-759fa568-0c72-4169-a3da-f93a533fb31b" satisfied condition "Succeeded or Failed"
Mar 29 03:32:39.441: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-configmaps-759fa568-0c72-4169-a3da-f93a533fb31b container configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 03:32:39.537: INFO: Waiting for pod pod-configmaps-759fa568-0c72-4169-a3da-f93a533fb31b to disappear
Mar 29 03:32:39.567: INFO: Pod pod-configmaps-759fa568-0c72-4169-a3da-f93a533fb31b no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 29 03:32:39.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3194" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":104,"skipped":1781,"failed":0}
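"With mappings" means the volume uses items to project a ConfigMap key to a custom file path instead of the default key-named file. A sketch of the volume source (ConfigMap name copied from the log; key and path are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{
                    Name: "configmap-test-volume-map-9f62b19d-74c7-430c-abaa-b19f9997cac2",
                },
                // Map key "data-1" to the file path/to/data-2 inside the mount
                // instead of the default file name "data-1".
                Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
            },
        },
    }
    b, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(b))
}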
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 17 lines ...
Mar 29 03:32:48.161: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 29 03:32:48.191: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 29 03:32:48.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4151" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":283,"completed":105,"skipped":1788,"failed":0}
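The hook under test is a postStart httpGet handler: the kubelet sends the GET right after the container starts, and the pod is deleted once the target records the hit. A sketch of the handler stanza (recent API versions name the type LifecycleHandler; releases before 1.23 called it Handler; host and port are placeholders):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    lc := corev1.Lifecycle{
        PostStart: &corev1.LifecycleHandler{
            HTTPGet: &corev1.HTTPGetAction{
                Path: "/echo?msg=poststart", // placeholder target path
                Host: "10.0.0.10",           // placeholder target pod IP
                Port: intstr.FromInt(8080),
            },
        },
    }
    b, _ := json.MarshalIndent(lc, "", "  ")
    fmt.Println(string(b))
}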
SSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 136 lines ...
Mar 29 03:33:18.959: INFO: stderr: ""
Mar 29 03:33:18.959: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 03:33:18.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5486" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":283,"completed":106,"skipped":1791,"failed":0}
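The Update Demo test drives kubectl scale up and down and verifies the replica count after each step. Roughly the same operation through client-go's scale subresource; the kubeconfig path, namespace, and controller name below are placeholders, not this run's actual values:

package main

import (
    "context"
    "fmt"

    autoscalingv1 "k8s.io/api/autoscaling/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumes a reachable cluster and a valid kubeconfig at this path.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Equivalent of `kubectl scale rc update-demo --replicas=1`.
    scale := &autoscalingv1.Scale{
        ObjectMeta: metav1.ObjectMeta{Name: "update-demo", Namespace: "default"},
        Spec:       autoscalingv1.ScaleSpec{Replicas: 1},
    }
    out, err := cs.CoreV1().ReplicationControllers("default").
        UpdateScale(context.TODO(), "update-demo", scale, metav1.UpdateOptions{})
    fmt.Println(out, err)
}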
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 29 03:33:26.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7975" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":283,"completed":107,"skipped":1794,"failed":0}
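"Promptly calculated" refers to the quota controller filling in status.hard and status.used shortly after the object is created; the test polls for exactly that. A sketch of such a quota (the limits are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    rq := corev1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
        Spec: corev1.ResourceQuotaSpec{
            Hard: corev1.ResourceList{
                corev1.ResourcePods:           resource.MustParse("5"),
                corev1.ResourceRequestsCPU:    resource.MustParse("1"),
                corev1.ResourceRequestsMemory: resource.MustParse("500Mi"),
            },
        },
    }
    // The quota controller later mirrors Spec.Hard into Status.Hard and
    // tracks consumption in Status.Used.
    b, _ := json.MarshalIndent(rq, "", "  ")
    fmt.Println(string(b))
}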
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 29 03:33:31.597: INFO: stderr: ""
Mar 29 03:33:31.597: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6108-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 03:33:34.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-685" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":283,"completed":108,"skipped":1805,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 42 lines ...
Mar 29 03:35:06.474: INFO: Waiting for statefulset status.replicas updated to 0
Mar 29 03:35:06.504: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 29 03:35:06.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6650" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":283,"completed":109,"skipped":1809,"failed":0}
SSSSSSSSSSSSSSSSSSSS
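
The canary/phased behavior exercised above comes from the StatefulSet rolling-update partition: only pods with ordinal >= partition receive the new template, and lowering the partition phases the update across the remaining pods. A sketch of the strategy stanza:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        // With partition=2, only ordinals 2 and above are updated (the canary);
        // dropping the partition toward 0 rolls the change out in phases.
        strategy := appsv1.StatefulSetUpdateStrategy{
            Type: appsv1.RollingUpdateStatefulSetStrategyType,
            RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
                Partition: int32Ptr(2),
            },
        }
        fmt.Println(strategy.Type)
    }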
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Mar 29 03:35:06.837: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 29 03:35:09.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2118" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":283,"completed":110,"skipped":1829,"failed":0}
SS
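
For a RestartAlways pod like the one above, init containers still run to completion, in order, before any app container starts. A minimal sketch of such a spec (names and images are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        spec := corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyAlways,
            // Init containers run sequentially to completion before run1 starts.
            InitContainers: []corev1.Container{
                {Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
                {Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
            },
            Containers: []corev1.Container{
                {Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
            },
        }
        fmt.Println(len(spec.InitContainers))
    }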
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 19 lines ...
Mar 29 03:36:47.188: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  test/e2e/framework/framework.go:175
Mar 29 03:36:47.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-8437" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":283,"completed":111,"skipped":1831,"failed":0}
SSSSSSSSSSSSS
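
The eviction above is driven by a bounded NoExecute toleration: the pod tolerates the taint only for tolerationSeconds, after which the taint manager evicts it. A sketch of such a toleration, with an illustrative 5-second bound:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        // Once the NoExecute taint lands on the node, the pod survives for
        // at most TolerationSeconds before the taint manager evicts it.
        tol := corev1.Toleration{
            Key:               "kubernetes.io/e2e-evict-taint-key",
            Operator:          corev1.TolerationOpEqual,
            Value:             "evictTaintVal",
            Effect:            corev1.TaintEffectNoExecute,
            TolerationSeconds: int64Ptr(5),
        }
        fmt.Println(tol.Key)
    }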
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
Mar 29 03:36:57.933: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9364 /api/v1/namespaces/watch-9364/configmaps/e2e-watch-test-label-changed de2e649c-cdff-485a-b229-36f21a8b109c 12760 0 2020-03-29 03:36:47 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 29 03:36:57.933: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-9364 /api/v1/namespaces/watch-9364/configmaps/e2e-watch-test-label-changed de2e649c-cdff-485a-b229-36f21a8b109c 12761 0 2020-03-29 03:36:47 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 29 03:36:57.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9364" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":283,"completed":112,"skipped":1844,"failed":0}
SSSSSSSSSSSSSSSS
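
The watch above filters by label, so relabeling an object out of the selector surfaces as a DELETED event even though the object still exists. A sketch, assuming a recent client-go where Watch takes a context (the kubeconfig path and namespace are illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // Only ConfigMaps carrying the label are observed; removing the label
        // produces a DELETED event on this watch, as seen in the log above.
        w, err := clientset.CoreV1().ConfigMaps("watch-demo").Watch(context.TODO(), metav1.ListOptions{
            LabelSelector: "watch-this-configmap=label-changed-and-restored",
        })
        if err != nil {
            panic(err)
        }
        for ev := range w.ResultChan() {
            fmt.Println(ev.Type)
        }
    }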
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 29 03:37:02.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1561" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":283,"completed":113,"skipped":1860,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 29 03:37:02.354: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 29 03:37:02.523: INFO: Waiting up to 5m0s for pod "pod-43d9d351-a012-41ad-8eb2-7d81e486fb39" in namespace "emptydir-8235" to be "Succeeded or Failed"
Mar 29 03:37:02.556: INFO: Pod "pod-43d9d351-a012-41ad-8eb2-7d81e486fb39": Phase="Pending", Reason="", readiness=false. Elapsed: 33.202473ms
Mar 29 03:37:04.586: INFO: Pod "pod-43d9d351-a012-41ad-8eb2-7d81e486fb39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063563255s
STEP: Saw pod success
Mar 29 03:37:04.586: INFO: Pod "pod-43d9d351-a012-41ad-8eb2-7d81e486fb39" satisfied condition "Succeeded or Failed"
Mar 29 03:37:04.616: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-43d9d351-a012-41ad-8eb2-7d81e486fb39 container test-container: <nil>
STEP: delete the pod
Mar 29 03:37:04.706: INFO: Waiting for pod pod-43d9d351-a012-41ad-8eb2-7d81e486fb39 to disappear
Mar 29 03:37:04.736: INFO: Pod pod-43d9d351-a012-41ad-8eb2-7d81e486fb39 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 03:37:04.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8235" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":114,"skipped":1865,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
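
The tmpfs variant of emptyDir is selected by the volume's medium; the 0777 mode itself is exercised by the test container creating and stat-ing a file inside the mount, not by the volume source. A sketch of the volume definition (volume name is illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Medium "Memory" backs the emptyDir with tmpfs; leaving Medium empty
        // uses whatever storage backs the node's default medium.
        vol := corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
            },
        }
        fmt.Println(vol.Name)
    }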
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Mar 29 03:37:04.957: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 29 03:37:07.729: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 03:37:20.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6978" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":283,"completed":115,"skipped":1888,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 7 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 29 03:37:25.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5838" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":283,"completed":116,"skipped":1899,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 03:37:25.768: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48f9c633-54d7-44f2-984b-3bed3e753674" in namespace "downward-api-1600" to be "Succeeded or Failed"
Mar 29 03:37:25.799: INFO: Pod "downwardapi-volume-48f9c633-54d7-44f2-984b-3bed3e753674": Phase="Pending", Reason="", readiness=false. Elapsed: 30.749422ms
Mar 29 03:37:27.829: INFO: Pod "downwardapi-volume-48f9c633-54d7-44f2-984b-3bed3e753674": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061620801s
STEP: Saw pod success
Mar 29 03:37:27.829: INFO: Pod "downwardapi-volume-48f9c633-54d7-44f2-984b-3bed3e753674" satisfied condition "Succeeded or Failed"
Mar 29 03:37:27.861: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-48f9c633-54d7-44f2-984b-3bed3e753674 container client-container: <nil>
STEP: delete the pod
Mar 29 03:37:27.950: INFO: Waiting for pod downwardapi-volume-48f9c633-54d7-44f2-984b-3bed3e753674 to disappear
Mar 29 03:37:27.979: INFO: Pod downwardapi-volume-48f9c633-54d7-44f2-984b-3bed3e753674 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 29 03:37:27.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1600" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":283,"completed":117,"skipped":1939,"failed":0}
SSSSSSSSSSSSSS
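
The downward API volume above exposes a container's memory limit as a file via a resourceFieldRef. A sketch of the volume definition (volume name and file path are illustrative; "client-container" matches the container name in the log):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path: "memory_limit",
                        // Projects the named container's memory limit into a file.
                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                            ContainerName: "client-container",
                            Resource:      "limits.memory",
                        },
                    }},
                },
            },
        }
        fmt.Println(vol.Name)
    }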
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 29 03:37:28.069: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 29 03:37:28.232: INFO: Waiting up to 5m0s for pod "downward-api-3d831124-76f9-4078-b712-b8eae13dce86" in namespace "downward-api-5525" to be "Succeeded or Failed"
Mar 29 03:37:28.262: INFO: Pod "downward-api-3d831124-76f9-4078-b712-b8eae13dce86": Phase="Pending", Reason="", readiness=false. Elapsed: 29.873097ms
Mar 29 03:37:30.293: INFO: Pod "downward-api-3d831124-76f9-4078-b712-b8eae13dce86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060581243s
STEP: Saw pod success
Mar 29 03:37:30.293: INFO: Pod "downward-api-3d831124-76f9-4078-b712-b8eae13dce86" satisfied condition "Succeeded or Failed"
Mar 29 03:37:30.326: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod downward-api-3d831124-76f9-4078-b712-b8eae13dce86 container dapi-container: <nil>
STEP: delete the pod
Mar 29 03:37:30.420: INFO: Waiting for pod downward-api-3d831124-76f9-4078-b712-b8eae13dce86 to disappear
Mar 29 03:37:30.450: INFO: Pod downward-api-3d831124-76f9-4078-b712-b8eae13dce86 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 29 03:37:30.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5525" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":283,"completed":118,"skipped":1953,"failed":0}
SSSSSSSSSS
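
The env vars checked above come from downward API fieldRefs on the pod spec. A sketch of the three selectors (variable names are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func fieldRef(path string) *corev1.EnvVarSource {
        return &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: path}}
    }

    func main() {
        // Pod name, namespace, and IP resolved at runtime by the kubelet.
        env := []corev1.EnvVar{
            {Name: "POD_NAME", ValueFrom: fieldRef("metadata.name")},
            {Name: "POD_NAMESPACE", ValueFrom: fieldRef("metadata.namespace")},
            {Name: "POD_IP", ValueFrom: fieldRef("status.podIP")},
        }
        fmt.Println(len(env))
    }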
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 29 03:37:35.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4880" for this suite.
STEP: Destroying namespace "webhook-4880-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":283,"completed":119,"skipped":1963,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-5073b437-23f8-4968-b3d9-6adc5fedeef6
STEP: Creating a pod to test consume configMaps
Mar 29 03:37:36.742: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7d9b572e-10d9-436f-9a1b-fa848d02a7e4" in namespace "projected-8305" to be "Succeeded or Failed"
Mar 29 03:37:36.782: INFO: Pod "pod-projected-configmaps-7d9b572e-10d9-436f-9a1b-fa848d02a7e4": Phase="Pending", Reason="", readiness=false. Elapsed: 39.696792ms
Mar 29 03:37:38.813: INFO: Pod "pod-projected-configmaps-7d9b572e-10d9-436f-9a1b-fa848d02a7e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.070715502s
STEP: Saw pod success
Mar 29 03:37:38.813: INFO: Pod "pod-projected-configmaps-7d9b572e-10d9-436f-9a1b-fa848d02a7e4" satisfied condition "Succeeded or Failed"
Mar 29 03:37:38.843: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-projected-configmaps-7d9b572e-10d9-436f-9a1b-fa848d02a7e4 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 03:37:38.934: INFO: Waiting for pod pod-projected-configmaps-7d9b572e-10d9-436f-9a1b-fa848d02a7e4 to disappear
Mar 29 03:37:38.963: INFO: Pod pod-projected-configmaps-7d9b572e-10d9-436f-9a1b-fa848d02a7e4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 29 03:37:38.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8305" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":120,"skipped":1972,"failed":0}

------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 29 03:37:44.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9125" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":283,"completed":121,"skipped":1972,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Mar 29 03:38:33.142: INFO: Restart count of pod container-probe-764/busybox-f9a9d4da-206c-4f23-be70-5ce370e29e65 is now 1 (46.739756594s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 29 03:38:33.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-764" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":283,"completed":122,"skipped":1988,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
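
The ~46s restart above is the expected interplay of the container deleting /tmp/health after 10 seconds and an exec liveness probe that only begins after 15 seconds. A sketch, assuming the v1.19-era Probe type that embeds corev1.Handler (renamed ProbeHandler in newer releases):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        container := corev1.Container{
            Name:    "busybox",
            Image:   "busybox",
            // Healthy for 10s, then the probe's "cat /tmp/health" starts failing.
            Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
            LivenessProbe: &corev1.Probe{
                Handler: corev1.Handler{
                    Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                },
                InitialDelaySeconds: 15,
                FailureThreshold:    1,
            },
        }
        fmt.Println(container.Name)
    }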
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-680ff1bd-8651-47ae-9462-d61ac7c5fbae
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 29 03:38:37.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5311" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":123,"skipped":2010,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Mar 29 03:38:44.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8242" for this suite.
STEP: Destroying namespace "webhook-8242-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":283,"completed":124,"skipped":2015,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 97 lines ...
Mar 29 03:39:07.410: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-239/pods","resourceVersion":"13681"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 29 03:39:07.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-239" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":283,"completed":125,"skipped":2018,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 29 03:39:07.921: INFO: stderr: ""
Mar 29 03:39:07.921: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.1.105+e03c7756c76ac8\", GitCommit:\"e03c7756c76ac8a0a484660515c0344ec8a10569\", GitTreeState:\"clean\", BuildDate:\"2020-02-11T14:24:02Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.1\", GitCommit:\"d647ddbd755faf07169599a625faf302ffc34458\", GitTreeState:\"clean\", BuildDate:\"2019-10-02T16:51:36Z\", GoVersion:\"go1.12.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 03:39:07.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9057" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":283,"completed":126,"skipped":2023,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-982998dd-fa85-4893-9f3e-48f842f54f58
STEP: Creating a pod to test consume secrets
Mar 29 03:39:08.196: INFO: Waiting up to 5m0s for pod "pod-secrets-841e5f0a-dfe3-41b2-9c19-dc7b743d39d6" in namespace "secrets-8732" to be "Succeeded or Failed"
Mar 29 03:39:08.229: INFO: Pod "pod-secrets-841e5f0a-dfe3-41b2-9c19-dc7b743d39d6": Phase="Pending", Reason="", readiness=false. Elapsed: 32.755505ms
Mar 29 03:39:10.259: INFO: Pod "pod-secrets-841e5f0a-dfe3-41b2-9c19-dc7b743d39d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063005504s
STEP: Saw pod success
Mar 29 03:39:10.259: INFO: Pod "pod-secrets-841e5f0a-dfe3-41b2-9c19-dc7b743d39d6" satisfied condition "Succeeded or Failed"
Mar 29 03:39:10.289: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-secrets-841e5f0a-dfe3-41b2-9c19-dc7b743d39d6 container secret-volume-test: <nil>
STEP: delete the pod
Mar 29 03:39:10.370: INFO: Waiting for pod pod-secrets-841e5f0a-dfe3-41b2-9c19-dc7b743d39d6 to disappear
Mar 29 03:39:10.400: INFO: Pod pod-secrets-841e5f0a-dfe3-41b2-9c19-dc7b743d39d6 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 29 03:39:10.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8732" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":127,"skipped":2057,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  test/e2e/common/pods.go:180
[It] should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 29 03:39:12.852: INFO: Waiting up to 5m0s for pod "client-envvars-be027f10-ede1-48b3-a282-2c481f8fbbde" in namespace "pods-6835" to be "Succeeded or Failed"
Mar 29 03:39:12.885: INFO: Pod "client-envvars-be027f10-ede1-48b3-a282-2c481f8fbbde": Phase="Pending", Reason="", readiness=false. Elapsed: 33.308653ms
Mar 29 03:39:14.916: INFO: Pod "client-envvars-be027f10-ede1-48b3-a282-2c481f8fbbde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063484745s
STEP: Saw pod success
Mar 29 03:39:14.916: INFO: Pod "client-envvars-be027f10-ede1-48b3-a282-2c481f8fbbde" satisfied condition "Succeeded or Failed"
Mar 29 03:39:14.945: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod client-envvars-be027f10-ede1-48b3-a282-2c481f8fbbde container env3cont: <nil>
STEP: delete the pod
Mar 29 03:39:15.036: INFO: Waiting for pod client-envvars-be027f10-ede1-48b3-a282-2c481f8fbbde to disappear
Mar 29 03:39:15.066: INFO: Pod client-envvars-be027f10-ede1-48b3-a282-2c481f8fbbde no longer exists
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 29 03:39:15.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6835" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":283,"completed":128,"skipped":2065,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 29 03:39:15.157: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-b347f858-93ec-4ebb-a509-76130c92eea4
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 29 03:39:15.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9371" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":283,"completed":129,"skipped":2097,"failed":0}
SSSSSSS
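
This test passes by observing a validation failure: the apiserver rejects a ConfigMap whose Data map contains an empty key. A sketch of provoking that error through client-go (kubeconfig path and namespace are illustrative):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
            Data:       map[string]string{"": "value-1"}, // empty key fails validation
        }
        _, err = clientset.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
        // The apiserver returns an Invalid error; the test succeeds by seeing
        // the rejection rather than the object.
        fmt.Println(apierrors.IsInvalid(err))
    }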
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 29 03:39:15.389: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 29 03:39:15.562: INFO: Waiting up to 5m0s for pod "downward-api-a0c883de-0e91-434d-ae94-04dde99ea182" in namespace "downward-api-4895" to be "Succeeded or Failed"
Mar 29 03:39:15.598: INFO: Pod "downward-api-a0c883de-0e91-434d-ae94-04dde99ea182": Phase="Pending", Reason="", readiness=false. Elapsed: 35.022763ms
Mar 29 03:39:17.628: INFO: Pod "downward-api-a0c883de-0e91-434d-ae94-04dde99ea182": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065760313s
STEP: Saw pod success
Mar 29 03:39:17.628: INFO: Pod "downward-api-a0c883de-0e91-434d-ae94-04dde99ea182" satisfied condition "Succeeded or Failed"
Mar 29 03:39:17.658: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downward-api-a0c883de-0e91-434d-ae94-04dde99ea182 container dapi-container: <nil>
STEP: delete the pod
Mar 29 03:39:17.741: INFO: Waiting for pod downward-api-a0c883de-0e91-434d-ae94-04dde99ea182 to disappear
Mar 29 03:39:17.773: INFO: Pod downward-api-a0c883de-0e91-434d-ae94-04dde99ea182 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 29 03:39:17.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4895" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":283,"completed":130,"skipped":2104,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 03:39:18.037: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ccdd9e5-2d76-481c-9d4c-94f2528f4125" in namespace "projected-368" to be "Succeeded or Failed"
Mar 29 03:39:18.070: INFO: Pod "downwardapi-volume-9ccdd9e5-2d76-481c-9d4c-94f2528f4125": Phase="Pending", Reason="", readiness=false. Elapsed: 32.506221ms
Mar 29 03:39:20.100: INFO: Pod "downwardapi-volume-9ccdd9e5-2d76-481c-9d4c-94f2528f4125": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06277479s
STEP: Saw pod success
Mar 29 03:39:20.100: INFO: Pod "downwardapi-volume-9ccdd9e5-2d76-481c-9d4c-94f2528f4125" satisfied condition "Succeeded or Failed"
Mar 29 03:39:20.130: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-9ccdd9e5-2d76-481c-9d4c-94f2528f4125 container client-container: <nil>
STEP: delete the pod
Mar 29 03:39:20.221: INFO: Waiting for pod downwardapi-volume-9ccdd9e5-2d76-481c-9d4c-94f2528f4125 to disappear
Mar 29 03:39:20.253: INFO: Pod downwardapi-volume-9ccdd9e5-2d76-481c-9d4c-94f2528f4125 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 29 03:39:20.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-368" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":283,"completed":131,"skipped":2108,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 29 03:39:22.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8824" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":283,"completed":132,"skipped":2132,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-cee5753a-3e7b-4374-9c2e-d8cc0a7e957a
STEP: Creating a pod to test consume secrets
Mar 29 03:39:23.100: INFO: Waiting up to 5m0s for pod "pod-secrets-d341c637-2edf-4e29-9a0b-db4e14c6c551" in namespace "secrets-5492" to be "Succeeded or Failed"
Mar 29 03:39:23.130: INFO: Pod "pod-secrets-d341c637-2edf-4e29-9a0b-db4e14c6c551": Phase="Pending", Reason="", readiness=false. Elapsed: 29.55425ms
Mar 29 03:39:25.161: INFO: Pod "pod-secrets-d341c637-2edf-4e29-9a0b-db4e14c6c551": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060381152s
STEP: Saw pod success
Mar 29 03:39:25.161: INFO: Pod "pod-secrets-d341c637-2edf-4e29-9a0b-db4e14c6c551" satisfied condition "Succeeded or Failed"
Mar 29 03:39:25.190: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-secrets-d341c637-2edf-4e29-9a0b-db4e14c6c551 container secret-volume-test: <nil>
STEP: delete the pod
Mar 29 03:39:25.271: INFO: Waiting for pod pod-secrets-d341c637-2edf-4e29-9a0b-db4e14c6c551 to disappear
Mar 29 03:39:25.302: INFO: Pod pod-secrets-d341c637-2edf-4e29-9a0b-db4e14c6c551 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 29 03:39:25.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5492" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":133,"skipped":2151,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 29 03:39:25.401: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 29 03:39:25.575: INFO: Waiting up to 5m0s for pod "pod-ffb4bc85-6c39-4a52-ad2a-142771be21ec" in namespace "emptydir-5699" to be "Succeeded or Failed"
Mar 29 03:39:25.609: INFO: Pod "pod-ffb4bc85-6c39-4a52-ad2a-142771be21ec": Phase="Pending", Reason="", readiness=false. Elapsed: 33.986811ms
Mar 29 03:39:27.640: INFO: Pod "pod-ffb4bc85-6c39-4a52-ad2a-142771be21ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064756428s
STEP: Saw pod success
Mar 29 03:39:27.640: INFO: Pod "pod-ffb4bc85-6c39-4a52-ad2a-142771be21ec" satisfied condition "Succeeded or Failed"
Mar 29 03:39:27.670: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-ffb4bc85-6c39-4a52-ad2a-142771be21ec container test-container: <nil>
STEP: delete the pod
Mar 29 03:39:27.751: INFO: Waiting for pod pod-ffb4bc85-6c39-4a52-ad2a-142771be21ec to disappear
Mar 29 03:39:27.782: INFO: Pod pod-ffb4bc85-6c39-4a52-ad2a-142771be21ec no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 03:39:27.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5699" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":134,"skipped":2154,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 25 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:175
Mar 29 03:39:37.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7861" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":283,"completed":135,"skipped":2162,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 34 lines ...
Mar 29 03:39:42.897: INFO: stdout: "service/rm3 exposed\n"
Mar 29 03:39:42.929: INFO: Service rm3 in namespace kubectl-6409 found.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 03:39:44.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6409" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":283,"completed":136,"skipped":2169,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 29 03:39:47.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9925" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":283,"completed":137,"skipped":2175,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 29 03:39:47.757: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Mar 29 03:39:47.925: INFO: Waiting up to 5m0s for pod "pod-f95602fd-782d-43d8-be24-9467724f044d" in namespace "emptydir-1445" to be "Succeeded or Failed"
Mar 29 03:39:47.960: INFO: Pod "pod-f95602fd-782d-43d8-be24-9467724f044d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.297904ms
Mar 29 03:39:49.990: INFO: Pod "pod-f95602fd-782d-43d8-be24-9467724f044d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065159263s
STEP: Saw pod success
Mar 29 03:39:49.990: INFO: Pod "pod-f95602fd-782d-43d8-be24-9467724f044d" satisfied condition "Succeeded or Failed"
Mar 29 03:39:50.023: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-f95602fd-782d-43d8-be24-9467724f044d container test-container: <nil>
STEP: delete the pod
Mar 29 03:39:50.108: INFO: Waiting for pod pod-f95602fd-782d-43d8-be24-9467724f044d to disappear
Mar 29 03:39:50.137: INFO: Pod pod-f95602fd-782d-43d8-be24-9467724f044d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 03:39:50.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1445" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":138,"skipped":2198,"failed":0}

------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 29 03:39:51.536: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 29 03:39:51.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6358" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":139,"skipped":2198,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:175
Mar 29 03:39:51.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9539" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":283,"completed":140,"skipped":2221,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
Mar 29 03:39:58.648: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 29 03:39:58.908: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:175
Mar 29 03:39:58.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6578" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":141,"skipped":2246,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-28bfb205-2f97-4383-b894-6a3fefd4d4b3
STEP: Creating a pod to test consume secrets
Mar 29 03:39:59.209: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-09a0c77e-f3ff-4db9-bcaf-fbfb43be702f" in namespace "projected-8835" to be "Succeeded or Failed"
Mar 29 03:39:59.246: INFO: Pod "pod-projected-secrets-09a0c77e-f3ff-4db9-bcaf-fbfb43be702f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.710569ms
Mar 29 03:40:01.277: INFO: Pod "pod-projected-secrets-09a0c77e-f3ff-4db9-bcaf-fbfb43be702f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.067349575s
STEP: Saw pod success
Mar 29 03:40:01.277: INFO: Pod "pod-projected-secrets-09a0c77e-f3ff-4db9-bcaf-fbfb43be702f" satisfied condition "Succeeded or Failed"
Mar 29 03:40:01.307: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-projected-secrets-09a0c77e-f3ff-4db9-bcaf-fbfb43be702f container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 29 03:40:01.389: INFO: Waiting for pod pod-projected-secrets-09a0c77e-f3ff-4db9-bcaf-fbfb43be702f to disappear
Mar 29 03:40:01.419: INFO: Pod pod-projected-secrets-09a0c77e-f3ff-4db9-bcaf-fbfb43be702f no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 29 03:40:01.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8835" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":142,"skipped":2247,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  test/e2e/framework/framework.go:175
Mar 29 03:40:08.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4786" for this suite.
STEP: Destroying namespace "nsdeletetest-5921" for this suite.
Mar 29 03:40:08.141: INFO: Namespace nsdeletetest-5921 was already deleted
STEP: Destroying namespace "nsdeletetest-3594" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":283,"completed":143,"skipped":2273,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-7e8f5f48-c617-48c3-b6dd-cdf4ec981e8c
STEP: Creating a pod to test consume secrets
Mar 29 03:40:08.387: INFO: Waiting up to 5m0s for pod "pod-secrets-27cb6e1b-e296-481a-8c09-6d1633b3682b" in namespace "secrets-9556" to be "Succeeded or Failed"
Mar 29 03:40:08.417: INFO: Pod "pod-secrets-27cb6e1b-e296-481a-8c09-6d1633b3682b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.871827ms
Mar 29 03:40:10.447: INFO: Pod "pod-secrets-27cb6e1b-e296-481a-8c09-6d1633b3682b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060089885s
STEP: Saw pod success
Mar 29 03:40:10.447: INFO: Pod "pod-secrets-27cb6e1b-e296-481a-8c09-6d1633b3682b" satisfied condition "Succeeded or Failed"
Mar 29 03:40:10.477: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-secrets-27cb6e1b-e296-481a-8c09-6d1633b3682b container secret-env-test: <nil>
STEP: delete the pod
Mar 29 03:40:10.558: INFO: Waiting for pod pod-secrets-27cb6e1b-e296-481a-8c09-6d1633b3682b to disappear
Mar 29 03:40:10.588: INFO: Pod pod-secrets-27cb6e1b-e296-481a-8c09-6d1633b3682b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 29 03:40:10.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9556" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":283,"completed":144,"skipped":2303,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:175
Mar 29 03:40:10.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-365" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":283,"completed":145,"skipped":2340,"failed":0}
SSSSSSSSSSSS
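
The QoS class asserted above is derived, not set: when every container's requests equal its limits for both cpu and memory, the pod lands in the Guaranteed class. A sketch of such a container (values are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        limits := corev1.ResourceList{
            corev1.ResourceCPU:    resource.MustParse("100m"),
            corev1.ResourceMemory: resource.MustParse("100Mi"),
        }
        container := corev1.Container{
            Name:  "agnhost",
            Image: "k8s.gcr.io/pause:3.2",
            // Requests == limits for every resource in every container
            // yields status.qosClass == Guaranteed on the created pod.
            Resources: corev1.ResourceRequirements{Limits: limits, Requests: limits},
        }
        fmt.Println(container.Name)
    }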
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 29 03:40:10.957: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 29 03:40:11.121: INFO: Waiting up to 5m0s for pod "pod-5a592f56-c7cd-4c10-a80f-8f0f109408ee" in namespace "emptydir-5453" to be "Succeeded or Failed"
Mar 29 03:40:11.153: INFO: Pod "pod-5a592f56-c7cd-4c10-a80f-8f0f109408ee": Phase="Pending", Reason="", readiness=false. Elapsed: 32.463376ms
Mar 29 03:40:13.183: INFO: Pod "pod-5a592f56-c7cd-4c10-a80f-8f0f109408ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062828319s
STEP: Saw pod success
Mar 29 03:40:13.183: INFO: Pod "pod-5a592f56-c7cd-4c10-a80f-8f0f109408ee" satisfied condition "Succeeded or Failed"
Mar 29 03:40:13.214: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-5a592f56-c7cd-4c10-a80f-8f0f109408ee container test-container: <nil>
STEP: delete the pod
Mar 29 03:40:13.295: INFO: Waiting for pod pod-5a592f56-c7cd-4c10-a80f-8f0f109408ee to disappear
Mar 29 03:40:13.335: INFO: Pod pod-5a592f56-c7cd-4c10-a80f-8f0f109408ee no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 03:40:13.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5453" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":146,"skipped":2352,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 26 lines ...
Mar 29 03:40:18.257: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-6jq2b" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-6jq2b test-rolling-update-deployment-664dd8fc7f- deployment-5342 /api/v1/namespaces/deployment-5342/pods/test-rolling-update-deployment-664dd8fc7f-6jq2b 618187db-cc08-4244-a7e7-2a5274d24e58 14573 0 2020-03-29 03:40:15 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:664dd8fc7f] map[cni.projectcalico.org/podIP:192.168.15.110/32] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 3892d6be-a44c-4b31-91f0-0f63e3ca07d5 0xc002b7be87 0xc002b7be88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mmlqh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mmlqh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mmlqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:40:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:40:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:40:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:40:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.15.110,StartTime:2020-03-29 03:40:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 03:40:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://c82001d9dc7ee69ee0fe78afa2c668ec20984bf6731996a68f5ac61f055ab417,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.15.110,},},EphemeralContainerStatuses:[]ContainerStatus{},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 29 03:40:18.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5342" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":283,"completed":147,"skipped":2360,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
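
The old-pods-deleted/new-pods-created ordering verified above is the RollingUpdate strategy: the controller scales the new ReplicaSet up and the old one down in steps bounded by maxSurge and maxUnavailable. A sketch of the strategy stanza (the bounds are illustrative):

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        maxUnavailable := intstr.FromInt(0)
        maxSurge := intstr.FromInt(1)
        // With maxUnavailable=0, an old pod is deleted only after its
        // replacement in the new ReplicaSet reports available.
        strategy := appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                MaxUnavailable: &maxUnavailable,
                MaxSurge:       &maxSurge,
            },
        }
        fmt.Println(strategy.Type)
    }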
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 29 03:40:25.008: INFO: File wheezy_udp@dns-test-service-3.dns-5034.svc.cluster.local from pod  dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 29 03:40:25.041: INFO: File jessie_udp@dns-test-service-3.dns-5034.svc.cluster.local from pod  dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 29 03:40:25.041: INFO: Lookups using dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee failed for: [wheezy_udp@dns-test-service-3.dns-5034.svc.cluster.local jessie_udp@dns-test-service-3.dns-5034.svc.cluster.local]

Mar 29 03:40:30.074: INFO: File wheezy_udp@dns-test-service-3.dns-5034.svc.cluster.local from pod  dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 29 03:40:30.107: INFO: File jessie_udp@dns-test-service-3.dns-5034.svc.cluster.local from pod  dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 29 03:40:30.107: INFO: Lookups using dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee failed for: [wheezy_udp@dns-test-service-3.dns-5034.svc.cluster.local jessie_udp@dns-test-service-3.dns-5034.svc.cluster.local]

Mar 29 03:40:35.072: INFO: File wheezy_udp@dns-test-service-3.dns-5034.svc.cluster.local from pod  dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 29 03:40:35.104: INFO: File jessie_udp@dns-test-service-3.dns-5034.svc.cluster.local from pod  dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 29 03:40:35.104: INFO: Lookups using dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee failed for: [wheezy_udp@dns-test-service-3.dns-5034.svc.cluster.local jessie_udp@dns-test-service-3.dns-5034.svc.cluster.local]

Mar 29 03:40:40.074: INFO: File wheezy_udp@dns-test-service-3.dns-5034.svc.cluster.local from pod  dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 29 03:40:40.108: INFO: File jessie_udp@dns-test-service-3.dns-5034.svc.cluster.local from pod  dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 29 03:40:40.108: INFO: Lookups using dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee failed for: [wheezy_udp@dns-test-service-3.dns-5034.svc.cluster.local jessie_udp@dns-test-service-3.dns-5034.svc.cluster.local]

Mar 29 03:40:45.074: INFO: File wheezy_udp@dns-test-service-3.dns-5034.svc.cluster.local from pod  dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 29 03:40:45.105: INFO: File jessie_udp@dns-test-service-3.dns-5034.svc.cluster.local from pod  dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 29 03:40:45.105: INFO: Lookups using dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee failed for: [wheezy_udp@dns-test-service-3.dns-5034.svc.cluster.local jessie_udp@dns-test-service-3.dns-5034.svc.cluster.local]

Mar 29 03:40:50.074: INFO: File wheezy_udp@dns-test-service-3.dns-5034.svc.cluster.local from pod  dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 29 03:40:50.105: INFO: File jessie_udp@dns-test-service-3.dns-5034.svc.cluster.local from pod  dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 29 03:40:50.105: INFO: Lookups using dns-5034/dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee failed for: [wheezy_udp@dns-test-service-3.dns-5034.svc.cluster.local jessie_udp@dns-test-service-3.dns-5034.svc.cluster.local]

Mar 29 03:40:55.107: INFO: DNS probes using dns-test-dc2982e6-f9bf-4e5f-9a52-123680c47fee succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5034.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5034.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 29 03:40:57.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5034" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":283,"completed":148,"skipped":2397,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
Mar 29 03:41:05.207: INFO: stderr: ""
Mar 29 03:41:05.207: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 03:41:05.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6519" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":283,"completed":149,"skipped":2414,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 29 03:41:05.297: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod with a failed condition
STEP: updating the pod
Mar 29 03:43:06.130: INFO: Successfully updated pod "var-expansion-95c18f96-1ba4-4662-945b-c4ac9ae40a88"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Mar 29 03:43:08.190: INFO: Deleting pod "var-expansion-95c18f96-1ba4-4662-945b-c4ac9ae40a88" in namespace "var-expansion-7375"
Mar 29 03:43:08.230: INFO: Wait up to 5m0s for pod "var-expansion-95c18f96-1ba4-4662-945b-c4ac9ae40a88" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 29 03:43:48.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7375" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":283,"completed":150,"skipped":2436,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Mar 29 03:43:55.859: INFO: stderr: ""
Mar 29 03:43:55.859: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 03:43:55.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8517" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":283,"completed":151,"skipped":2522,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 03:43:56.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f6aa109-cacb-447a-9cc1-14b6b381a9b1" in namespace "downward-api-3982" to be "Succeeded or Failed"
Mar 29 03:43:56.157: INFO: Pod "downwardapi-volume-4f6aa109-cacb-447a-9cc1-14b6b381a9b1": Phase="Pending", Reason="", readiness=false. Elapsed: 35.187901ms
Mar 29 03:43:58.188: INFO: Pod "downwardapi-volume-4f6aa109-cacb-447a-9cc1-14b6b381a9b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.066130476s
STEP: Saw pod success
Mar 29 03:43:58.188: INFO: Pod "downwardapi-volume-4f6aa109-cacb-447a-9cc1-14b6b381a9b1" satisfied condition "Succeeded or Failed"
Mar 29 03:43:58.217: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-4f6aa109-cacb-447a-9cc1-14b6b381a9b1 container client-container: <nil>
STEP: delete the pod
Mar 29 03:43:58.316: INFO: Waiting for pod downwardapi-volume-4f6aa109-cacb-447a-9cc1-14b6b381a9b1 to disappear
Mar 29 03:43:58.346: INFO: Pod downwardapi-volume-4f6aa109-cacb-447a-9cc1-14b6b381a9b1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 29 03:43:58.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3982" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":283,"completed":152,"skipped":2527,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 29 03:43:58.435: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Mar 29 03:43:58.608: INFO: Waiting up to 5m0s for pod "var-expansion-6b1bb4fc-b7b9-4cbe-acfd-7bc02ef9d1c9" in namespace "var-expansion-9454" to be "Succeeded or Failed"
Mar 29 03:43:58.637: INFO: Pod "var-expansion-6b1bb4fc-b7b9-4cbe-acfd-7bc02ef9d1c9": Phase="Pending", Reason="", readiness=false. Elapsed: 29.509184ms
Mar 29 03:44:00.670: INFO: Pod "var-expansion-6b1bb4fc-b7b9-4cbe-acfd-7bc02ef9d1c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062135111s
STEP: Saw pod success
Mar 29 03:44:00.670: INFO: Pod "var-expansion-6b1bb4fc-b7b9-4cbe-acfd-7bc02ef9d1c9" satisfied condition "Succeeded or Failed"
Mar 29 03:44:00.700: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod var-expansion-6b1bb4fc-b7b9-4cbe-acfd-7bc02ef9d1c9 container dapi-container: <nil>
STEP: delete the pod
Mar 29 03:44:00.803: INFO: Waiting for pod var-expansion-6b1bb4fc-b7b9-4cbe-acfd-7bc02ef9d1c9 to disappear
Mar 29 03:44:00.834: INFO: Pod var-expansion-6b1bb4fc-b7b9-4cbe-acfd-7bc02ef9d1c9 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 29 03:44:00.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9454" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":283,"completed":153,"skipped":2529,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
... skipping 31 lines ...
Mar 29 03:44:08.588: INFO: limitRange is already deleted
STEP: Creating a Pod with more than the former max resources
[AfterEach] [sig-scheduling] LimitRange
  test/e2e/framework/framework.go:175
Mar 29 03:44:08.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-9269" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":283,"completed":154,"skipped":2536,"failed":0}
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-aa499866-f101-415b-b334-adcba955a7d8
STEP: Creating a pod to test consume secrets
Mar 29 03:44:08.919: INFO: Waiting up to 5m0s for pod "pod-secrets-65621e8b-0b41-4e8d-9446-8f343e35d420" in namespace "secrets-6939" to be "Succeeded or Failed"
Mar 29 03:44:08.951: INFO: Pod "pod-secrets-65621e8b-0b41-4e8d-9446-8f343e35d420": Phase="Pending", Reason="", readiness=false. Elapsed: 32.255736ms
Mar 29 03:44:10.981: INFO: Pod "pod-secrets-65621e8b-0b41-4e8d-9446-8f343e35d420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062290894s
STEP: Saw pod success
Mar 29 03:44:10.981: INFO: Pod "pod-secrets-65621e8b-0b41-4e8d-9446-8f343e35d420" satisfied condition "Succeeded or Failed"
Mar 29 03:44:11.012: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-secrets-65621e8b-0b41-4e8d-9446-8f343e35d420 container secret-volume-test: <nil>
STEP: delete the pod
Mar 29 03:44:11.091: INFO: Waiting for pod pod-secrets-65621e8b-0b41-4e8d-9446-8f343e35d420 to disappear
Mar 29 03:44:11.122: INFO: Pod pod-secrets-65621e8b-0b41-4e8d-9446-8f343e35d420 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 29 03:44:11.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6939" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":155,"skipped":2539,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 29 03:44:51.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0329 03:44:51.587027   24871 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-3719" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":283,"completed":156,"skipped":2566,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 29 03:44:51.661: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 29 03:44:51.832: INFO: Waiting up to 5m0s for pod "pod-0fe5c111-b15a-4c89-8a4b-a32200a8994c" in namespace "emptydir-6001" to be "Succeeded or Failed"
Mar 29 03:44:51.863: INFO: Pod "pod-0fe5c111-b15a-4c89-8a4b-a32200a8994c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.228996ms
Mar 29 03:44:53.892: INFO: Pod "pod-0fe5c111-b15a-4c89-8a4b-a32200a8994c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060585823s
STEP: Saw pod success
Mar 29 03:44:53.892: INFO: Pod "pod-0fe5c111-b15a-4c89-8a4b-a32200a8994c" satisfied condition "Succeeded or Failed"
Mar 29 03:44:53.922: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-0fe5c111-b15a-4c89-8a4b-a32200a8994c container test-container: <nil>
STEP: delete the pod
Mar 29 03:44:54.011: INFO: Waiting for pod pod-0fe5c111-b15a-4c89-8a4b-a32200a8994c to disappear
Mar 29 03:44:54.041: INFO: Pod pod-0fe5c111-b15a-4c89-8a4b-a32200a8994c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 03:44:54.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6001" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":157,"skipped":2590,"failed":0}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 29 03:45:00.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4888" for this suite.
STEP: Destroying namespace "webhook-4888-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":283,"completed":158,"skipped":2590,"failed":0}
S
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 43 lines ...
Mar 29 03:45:19.885: INFO: Pod "test-rollover-deployment-78df7bc796-k5q64" is available:
&Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-k5q64 test-rollover-deployment-78df7bc796- deployment-4603 /api/v1/namespaces/deployment-4603/pods/test-rollover-deployment-78df7bc796-k5q64 e48acf39-8c92-4472-9ca2-4b633a0b4724 16068 0 2020-03-29 03:45:07 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78df7bc796] map[cni.projectcalico.org/podIP:192.168.15.122/32] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 ff292ffd-32d2-43cf-b836-12681175d437 0xc003a53e07 0xc003a53e08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w5nmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w5nmq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w5nmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:45:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:45:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:45:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-29 03:45:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.15.122,StartTime:2020-03-29 03:45:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-29 03:45:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://4787f26cbc7da83f5db821cbe9fa5f83fc52f0d1ff7ea32121d7ef573ba7a4ee,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.15.122,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 29 03:45:19.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4603" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":283,"completed":159,"skipped":2591,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 29 03:45:36.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7800" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":283,"completed":160,"skipped":2593,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 29 03:45:36.727: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Mar 29 03:47:36.986: INFO: Deleting pod "var-expansion-ce13b692-fa6c-4ac2-b813-fcde442b2c16" in namespace "var-expansion-2226"
Mar 29 03:47:37.023: INFO: Wait up to 5m0s for pod "var-expansion-ce13b692-fa6c-4ac2-b813-fcde442b2c16" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 29 03:47:39.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2226" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":283,"completed":161,"skipped":2605,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-dd6d2a0b-18d8-4cdc-88db-0ecaceab6269
STEP: Creating a pod to test consume configMaps
Mar 29 03:47:39.382: INFO: Waiting up to 5m0s for pod "pod-configmaps-60219147-d7a7-49a9-9d2d-0b11bf6ed175" in namespace "configmap-4095" to be "Succeeded or Failed"
Mar 29 03:47:39.413: INFO: Pod "pod-configmaps-60219147-d7a7-49a9-9d2d-0b11bf6ed175": Phase="Pending", Reason="", readiness=false. Elapsed: 30.752297ms
Mar 29 03:47:41.443: INFO: Pod "pod-configmaps-60219147-d7a7-49a9-9d2d-0b11bf6ed175": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060927103s
STEP: Saw pod success
Mar 29 03:47:41.443: INFO: Pod "pod-configmaps-60219147-d7a7-49a9-9d2d-0b11bf6ed175" satisfied condition "Succeeded or Failed"
Mar 29 03:47:41.472: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-configmaps-60219147-d7a7-49a9-9d2d-0b11bf6ed175 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 03:47:41.561: INFO: Waiting for pod pod-configmaps-60219147-d7a7-49a9-9d2d-0b11bf6ed175 to disappear
Mar 29 03:47:41.592: INFO: Pod pod-configmaps-60219147-d7a7-49a9-9d2d-0b11bf6ed175 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 29 03:47:41.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4095" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":162,"skipped":2627,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 8 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 29 03:47:44.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8508" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":283,"completed":163,"skipped":2718,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 03:47:51.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3243" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":283,"completed":164,"skipped":2733,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 29 03:47:51.667: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 29 03:47:51.849: INFO: Waiting up to 5m0s for pod "pod-2a3f61fa-563e-49ee-af96-f03fca0bd599" in namespace "emptydir-7887" to be "Succeeded or Failed"
Mar 29 03:47:51.883: INFO: Pod "pod-2a3f61fa-563e-49ee-af96-f03fca0bd599": Phase="Pending", Reason="", readiness=false. Elapsed: 34.012717ms
Mar 29 03:47:53.914: INFO: Pod "pod-2a3f61fa-563e-49ee-af96-f03fca0bd599": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06447306s
STEP: Saw pod success
Mar 29 03:47:53.914: INFO: Pod "pod-2a3f61fa-563e-49ee-af96-f03fca0bd599" satisfied condition "Succeeded or Failed"
Mar 29 03:47:53.945: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-2a3f61fa-563e-49ee-af96-f03fca0bd599 container test-container: <nil>
STEP: delete the pod
Mar 29 03:47:54.024: INFO: Waiting for pod pod-2a3f61fa-563e-49ee-af96-f03fca0bd599 to disappear
Mar 29 03:47:54.055: INFO: Pod pod-2a3f61fa-563e-49ee-af96-f03fca0bd599 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 03:47:54.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7887" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":165,"skipped":2733,"failed":0}

------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-4e0952d6-7985-459d-9d1b-5f9235b503e7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 29 03:47:58.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7238" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":166,"skipped":2733,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-8580921c-3ee0-4981-87e6-45855f3b7c61
STEP: Creating a pod to test consume secrets
Mar 29 03:47:59.175: INFO: Waiting up to 5m0s for pod "pod-secrets-7caa582d-ccfe-4837-bad0-11ce29b3fc0e" in namespace "secrets-1672" to be "Succeeded or Failed"
Mar 29 03:47:59.209: INFO: Pod "pod-secrets-7caa582d-ccfe-4837-bad0-11ce29b3fc0e": Phase="Pending", Reason="", readiness=false. Elapsed: 33.838276ms
Mar 29 03:48:01.240: INFO: Pod "pod-secrets-7caa582d-ccfe-4837-bad0-11ce29b3fc0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064265628s
STEP: Saw pod success
Mar 29 03:48:01.240: INFO: Pod "pod-secrets-7caa582d-ccfe-4837-bad0-11ce29b3fc0e" satisfied condition "Succeeded or Failed"
Mar 29 03:48:01.270: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-secrets-7caa582d-ccfe-4837-bad0-11ce29b3fc0e container secret-volume-test: <nil>
STEP: delete the pod
Mar 29 03:48:01.368: INFO: Waiting for pod pod-secrets-7caa582d-ccfe-4837-bad0-11ce29b3fc0e to disappear
Mar 29 03:48:01.398: INFO: Pod pod-secrets-7caa582d-ccfe-4837-bad0-11ce29b3fc0e no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 29 03:48:01.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1672" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":167,"skipped":2735,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-a273412c-881e-40f0-8ca9-3cc807315ca8
STEP: Creating a pod to test consume configMaps
Mar 29 03:48:01.693: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ed93301-b352-4f91-b02d-597a8e11073d" in namespace "configmap-9223" to be "Succeeded or Failed"
Mar 29 03:48:01.723: INFO: Pod "pod-configmaps-6ed93301-b352-4f91-b02d-597a8e11073d": Phase="Pending", Reason="", readiness=false. Elapsed: 29.672924ms
Mar 29 03:48:03.754: INFO: Pod "pod-configmaps-6ed93301-b352-4f91-b02d-597a8e11073d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060152932s
STEP: Saw pod success
Mar 29 03:48:03.754: INFO: Pod "pod-configmaps-6ed93301-b352-4f91-b02d-597a8e11073d" satisfied condition "Succeeded or Failed"
Mar 29 03:48:03.783: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-configmaps-6ed93301-b352-4f91-b02d-597a8e11073d container configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 03:48:03.869: INFO: Waiting for pod pod-configmaps-6ed93301-b352-4f91-b02d-597a8e11073d to disappear
Mar 29 03:48:03.900: INFO: Pod pod-configmaps-6ed93301-b352-4f91-b02d-597a8e11073d no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 29 03:48:03.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9223" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":283,"completed":168,"skipped":2743,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 29 03:48:15.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1480" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":283,"completed":169,"skipped":2761,"failed":0}

------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 55 lines ...
Mar 29 03:50:07.410: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2832/pods","resourceVersion":"17210"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 29 03:50:07.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2832" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":283,"completed":170,"skipped":2761,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-5276/configmap-test-e87ab1cc-232a-4438-8a77-656677d7184d
STEP: Creating a pod to test consume configMaps
Mar 29 03:50:07.802: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3af77f7-3bf7-4629-8701-9d4baf21ddae" in namespace "configmap-5276" to be "Succeeded or Failed"
Mar 29 03:50:07.832: INFO: Pod "pod-configmaps-d3af77f7-3bf7-4629-8701-9d4baf21ddae": Phase="Pending", Reason="", readiness=false. Elapsed: 29.933398ms
Mar 29 03:50:09.862: INFO: Pod "pod-configmaps-d3af77f7-3bf7-4629-8701-9d4baf21ddae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060005838s
STEP: Saw pod success
Mar 29 03:50:09.862: INFO: Pod "pod-configmaps-d3af77f7-3bf7-4629-8701-9d4baf21ddae" satisfied condition "Succeeded or Failed"
Mar 29 03:50:09.893: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-configmaps-d3af77f7-3bf7-4629-8701-9d4baf21ddae container env-test: <nil>
STEP: delete the pod
Mar 29 03:50:09.996: INFO: Waiting for pod pod-configmaps-d3af77f7-3bf7-4629-8701-9d4baf21ddae to disappear
Mar 29 03:50:10.026: INFO: Pod pod-configmaps-d3af77f7-3bf7-4629-8701-9d4baf21ddae no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 29 03:50:10.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5276" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":283,"completed":171,"skipped":2774,"failed":0}
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 55 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 29 03:50:14.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1231" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":283,"completed":172,"skipped":2780,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 03:50:14.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6193" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":283,"completed":173,"skipped":2827,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5256
STEP: Creating statefulset with conflicting port in namespace statefulset-5256
STEP: Waiting until pod test-pod starts running in namespace statefulset-5256
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-5256
Mar 29 03:50:18.910: INFO: Observed stateful pod in namespace: statefulset-5256, name: ss-0, uid: 901a551e-3b8a-4b4d-8461-0efe5971166b, status phase: Pending. Waiting for statefulset controller to delete.
Mar 29 03:50:19.797: INFO: Observed stateful pod in namespace: statefulset-5256, name: ss-0, uid: 901a551e-3b8a-4b4d-8461-0efe5971166b, status phase: Failed. Waiting for statefulset controller to delete.
Mar 29 03:50:19.812: INFO: Observed stateful pod in namespace: statefulset-5256, name: ss-0, uid: 901a551e-3b8a-4b4d-8461-0efe5971166b, status phase: Failed. Waiting for statefulset controller to delete.
Mar 29 03:50:19.828: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5256
STEP: Removing pod with conflicting port in namespace statefulset-5256
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5256 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:110
Mar 29 03:50:23.984: INFO: Deleting all statefulset in ns statefulset-5256
Mar 29 03:50:24.015: INFO: Scaling statefulset ss to 0
Mar 29 03:50:44.145: INFO: Waiting for statefulset status.replicas updated to 0
Mar 29 03:50:44.175: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 29 03:50:44.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5256" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":283,"completed":174,"skipped":2843,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 29 03:50:44.542: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-858fa24e-5240-4d33-9a4a-16b0654cbfda" in namespace "security-context-test-4470" to be "Succeeded or Failed"
Mar 29 03:50:44.576: INFO: Pod "alpine-nnp-false-858fa24e-5240-4d33-9a4a-16b0654cbfda": Phase="Pending", Reason="", readiness=false. Elapsed: 33.820718ms
Mar 29 03:50:46.607: INFO: Pod "alpine-nnp-false-858fa24e-5240-4d33-9a4a-16b0654cbfda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064521581s
Mar 29 03:50:46.607: INFO: Pod "alpine-nnp-false-858fa24e-5240-4d33-9a4a-16b0654cbfda" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 29 03:50:46.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4470" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":175,"skipped":2861,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 29 03:50:46.746: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 29 03:50:46.914: INFO: Waiting up to 5m0s for pod "pod-7b6cf443-7e55-4745-85af-c9ad0839790f" in namespace "emptydir-4387" to be "Succeeded or Failed"
Mar 29 03:50:46.945: INFO: Pod "pod-7b6cf443-7e55-4745-85af-c9ad0839790f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.225217ms
Mar 29 03:50:48.975: INFO: Pod "pod-7b6cf443-7e55-4745-85af-c9ad0839790f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060822342s
STEP: Saw pod success
Mar 29 03:50:48.975: INFO: Pod "pod-7b6cf443-7e55-4745-85af-c9ad0839790f" satisfied condition "Succeeded or Failed"
Mar 29 03:50:49.005: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-7b6cf443-7e55-4745-85af-c9ad0839790f container test-container: <nil>
STEP: delete the pod
Mar 29 03:50:49.085: INFO: Waiting for pod pod-7b6cf443-7e55-4745-85af-c9ad0839790f to disappear
Mar 29 03:50:49.117: INFO: Pod pod-7b6cf443-7e55-4745-85af-c9ad0839790f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 03:50:49.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4387" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":176,"skipped":2880,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Mar 29 03:50:49.581: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7854 /api/v1/namespaces/watch-7854/configmaps/e2e-watch-test-resource-version 600bf24f-51e2-4fcb-a72c-c2402ad1d5bf 17608 0 2020-03-29 03:50:49 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 29 03:50:49.581: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7854 /api/v1/namespaces/watch-7854/configmaps/e2e-watch-test-resource-version 600bf24f-51e2-4fcb-a72c-c2402ad1d5bf 17609 0 2020-03-29 03:50:49 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 29 03:50:49.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7854" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":283,"completed":177,"skipped":2900,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 29 03:50:49.649: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 29 03:50:55.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4448" for this suite.
•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":283,"completed":178,"skipped":2918,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 03:51:13.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9524" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":283,"completed":179,"skipped":2949,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 15 lines ...
Mar 29 03:51:22.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721050674, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721050674, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721050674, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721050674, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 29 03:51:24.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721050674, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721050674, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721050674, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721050674, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 29 03:51:26.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721050674, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721050674, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721050674, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721050674, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 29 03:52:28.698: INFO: Waited 1m0.23773054s for the sample-apiserver to be ready to handle requests.
Mar 29 03:52:28.698: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","selfLink":"/apis/apiregistration.k8s.io/v1/apiservices/v1alpha1.wardle.example.com","uid":"622489df-e71d-4dc6-9ec2-6ef515a544d2","resourceVersion":"17944","creationTimestamp":"2020-03-29T03:51:28Z"},"spec":{"service":{"namespace":"aggregator-6636","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyRENDQWNDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpBd016STVNRE0xTVRFeldoY05NekF3TXpJM01ETTFNVEV6V2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURFSDdyNzd2dlNEbHlPNndKNUJXVjdoTk9TUStiSW01YnRJY3FLUkpFaXFRUGQKSngwMDNablFCZnVTaStTYTJsMlNxUk1RV2xQS29yeHZhVk51c3lDQVhxa0JxOFlUUllENmlVT0RScHZrdndoeQpQd0lXa0U2M1ViVkhFNG96V2s0Rks5VUcrRHBlSjJWbCtTeWhWbU9NUjFUWW5mT3plakpYcXRLVzhaWUhWRWFjCkd4RWJJYnd6cUk4Sk4wUk9MYTBHOTlwLzkzU21ETERaOGtsTXdLZElyUW9NejZOWC9TSFV6OGp0R0tvbC9tMjIKeHVlK0ZucVdJc2Z3d3oyb2FaZ3VCdFE5VjN6RVo0MDJPRUVDY0k3Mk0rQk5yUGpKaVI2MDBWdU1uMk1oL2lkaApsYlZtbXBzWCsxUlo4bXlZdTd5NXVRWUp1MytvOE9UUjZuU01vNWVEQWdNQkFBR2pJekFoTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDZ0R2ekkKejJtcjN5dWQ4ekpRbnBac3pTcEVjMWhTTjNIK1B0L3RwZWl3a2dEOGRCNUY2ZkF1Qi9FcHIveUVBMDlEb1REMwpjQUcvcTFmckdVVC92Z0N0UllnaGV6RUxJRGhFd2JXTS8zRklmSUUzL2NkUkZrUEhBOVJsRW14NU50TnBnalVNCnpLQTJiL3FaWjYvNHNHWVlJaGp2Z3JNUGkrUml3ajlleTEyc2gvWlc5SEkwV01Pa2lqb1FCVWRlWnkzM3Y5TXcKSGFicHF6S0VzbWxDYlovWkFPeGNCek9ZeEhxcDlIUExMMm1hczZZQ3pqU1Azd3JKK0xuR002UDNrWjNHMlZyNAoxdkk1ZUU4YWhWdmU3M2ptNVh3RVpnWDFQYk5OcUh4b05xVVByeVp2V24wWUs0OENlUE1oRFllUHpyY1RiSWYrCjY1OTVBdldocTkzYW1hQnEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2020-03-29T03:51:28Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.98.111.18:7443/apis/wardle.example.com/v1alpha1: bad status from https://10.98.111.18:7443/apis/wardle.example.com/v1alpha1: 403"}]}}
Mar 29 03:52:28.698: INFO: current pods: {"metadata":{"selfLink":"/api/v1/namespaces/aggregator-6636/pods","resourceVersion":"18060"},"items":[{"metadata":{"name":"sample-apiserver-deployment-54b47bf96b-lgdfq","generateName":"sample-apiserver-deployment-54b47bf96b-","namespace":"aggregator-6636","selfLink":"/api/v1/namespaces/aggregator-6636/pods/sample-apiserver-deployment-54b47bf96b-lgdfq","uid":"b7afe4bc-3316-4d32-b561-ad1918684b3a","resourceVersion":"17935","creationTimestamp":"2020-03-29T03:51:14Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"54b47bf96b"},"annotations":{"cni.projectcalico.org/podIP":"192.168.15.71/32"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-54b47bf96b","uid":"16167dad-4d24-41ed-bc16-67191fb21c86","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"default-token-cq6s5","secret":{"secretName":"default-token-cq6s5","defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"default-token-cq6s5","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.4","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"default-token-cq6s5","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-29T03:51:14Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-29T03:51:26Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-29T03:51:26Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-03-29T03:51:14Z"}],"hostIP":"10.150.0.3","podIP":"192.168.15.71","podIPs":[{"ip":"192.168.15.71"}],"startTime":"2020-03-29T03:51:14Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2020-03-29T03:51:26Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.4","imageID":"k8s.gcr.io/etcd@sha256:e10ee22e7b56d08b7cb7da2a390863c445d66a7284294cee8c9decbfb3ba4359","containerID":"containerd://b394d2a55f4df8e43ad7b84b162f9b191bdd17b1986f00b4b60f6799390ac300","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2020-03-29T03:51:17Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17","imageID":"gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55","containerID":"containerd://c87dc86b19880dbb023f71ad5e91abbab22bc0f1eafc13e73bbab189914b8776","started":true}],"qosClass":"BestEffort"}}]}
Mar 29 03:52:28.791: INFO: logs of sample-apiserver-deployment-54b47bf96b-lgdfq/sample-apiserver (error: <nil>): W0329 03:51:17.652102       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0329 03:51:17.652194       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
I0329 03:51:17.668895       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
I0329 03:51:17.669145       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
I0329 03:51:17.671274       1 client.go:361] parsed scheme: "endpoint"
I0329 03:51:17.671313       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0329 03:51:17.671733       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0329 03:51:18.318908       1 client.go:361] parsed scheme: "endpoint"
I0329 03:51:18.319022       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0329 03:51:18.319363       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 03:51:18.672155       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 03:51:19.319806       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 03:51:20.428643       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 03:51:20.928416       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 03:51:22.878771       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0329 03:51:23.195100       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0329 03:51:27.778411       1 client.go:361] parsed scheme: "endpoint"
I0329 03:51:27.778452       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0329 03:51:27.779594       1 client.go:361] parsed scheme: "endpoint"
I0329 03:51:27.779622       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0329 03:51:27.781509       1 client.go:361] parsed scheme: "endpoint"
I0329 03:51:27.781548       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0329 03:51:27.840811       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0329 03:51:27.841062       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0329 03:51:27.841218       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0329 03:51:27.841319       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0329 03:51:27.841835       1 dynamic_serving_content.go:129] Starting serving-cert::/apiserver.local.config/certificates/tls.crt::/apiserver.local.config/certificates/tls.key
I0329 03:51:27.843465       1 secure_serving.go:178] Serving securely on [::]:443
I0329 03:51:27.844393       1 tlsconfig.go:219] Starting DynamicServingCertificateController
E0329 03:51:27.847371       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-6636:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0329 03:51:27.851179       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-6636:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
... skipping 118 lines ...
E0329 03:52:27.983582       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-6636:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0329 03:52:27.986156       1 reflector.go:156] pkg/mod/k8s.io/client-go@v0.17.0/tools/cache/reflector.go:108: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:aggregator-6636:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
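
The repeated reflector errors above are the proximate cause of the 403: the pod runs as system:serviceaccount:aggregator-6636:default, which is not allowed to read the extension-apiserver-authentication configmap in kube-system, so the server never loads the client-CA and requestheader-CA bundles it needs to authenticate requests forwarded by the aggregator. Standard clusters ship an extension-apiserver-authentication-reader Role in kube-system for exactly this purpose; a minimal sketch of the RoleBinding that normally satisfies these reads (the binding name is illustrative):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: wardle-auth-reader      # illustrative name, not taken from this run
      namespace: kube-system        # must live in the configmap's namespace
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: extension-apiserver-authentication-reader
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: aggregator-6636
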

Mar 29 03:52:28.827: INFO: logs of sample-apiserver-deployment-54b47bf96b-lgdfq/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-29 03:51:26.590057 I | etcdmain: etcd Version: 3.4.4
2020-03-29 03:51:26.590423 I | etcdmain: Git SHA: c65a9e2dd
2020-03-29 03:51:26.590430 I | etcdmain: Go Version: go1.12.12
2020-03-29 03:51:26.590436 I | etcdmain: Go OS/Arch: linux/amd64
2020-03-29 03:51:26.590559 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-03-29 03:51:26.590571 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
2020-03-29 03:51:27.205073 I | etcdserver: setting up the initial cluster version to 3.4
2020-03-29 03:51:27.205231 I | embed: ready to serve client requests
2020-03-29 03:51:27.205717 N | etcdserver/membership: set the initial cluster version to 3.4
2020-03-29 03:51:27.205909 I | etcdserver/api: enabled capabilities for version 3.4
2020-03-29 03:51:27.206593 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!

Mar 29 03:52:28.827: FAIL: gave up waiting for apiservice wardle to come up successfully
Unexpected error:
    <*errors.errorString | 0xc0000ddff0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
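
One more check worth making when an APIService reports FailedDiscoveryCheck: the registration above targets Service sample-api on port 7443, while the container log shows the server listening on [::]:443, so the Service must map 7443 to targetPort 443. In this run the probe did reach the server (the condition records a bad status of 403 rather than a dial error), so the mapping itself was working; a broken mapping would instead surface as a connection failure in the same message. A sketch of a Service consistent with the objects in this log (the selector is an assumption based on the apiserver=true pod label shown earlier):

    apiVersion: v1
    kind: Service
    metadata:
      name: sample-api
      namespace: aggregator-6636
    spec:
      selector:
        apiserver: "true"           # assumption: matches the pod's labels above
      ports:
      - port: 7443                  # port named in the APIService spec
        targetPort: 443             # port the container actually serves on
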

... skipping 153 lines ...
[sig-api-machinery] Aggregator
test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
  test/e2e/framework/framework.go:597

  Mar 29 03:52:28.827: gave up waiting for apiservice wardle to come up successfully
  Unexpected error:
      <*errors.errorString | 0xc0000ddff0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  test/e2e/apimachinery/aggregator.go:401
------------------------------
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":283,"completed":179,"skipped":2978,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 29 03:52:42.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-116" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":283,"completed":180,"skipped":2979,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 03:52:43.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4254e3d8-14fb-4cc0-aefc-29bbe781a896" in namespace "projected-2615" to be "Succeeded or Failed"
Mar 29 03:52:43.081: INFO: Pod "downwardapi-volume-4254e3d8-14fb-4cc0-aefc-29bbe781a896": Phase="Pending", Reason="", readiness=false. Elapsed: 30.630292ms
Mar 29 03:52:45.111: INFO: Pod "downwardapi-volume-4254e3d8-14fb-4cc0-aefc-29bbe781a896": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061060219s
STEP: Saw pod success
Mar 29 03:52:45.111: INFO: Pod "downwardapi-volume-4254e3d8-14fb-4cc0-aefc-29bbe781a896" satisfied condition "Succeeded or Failed"
Mar 29 03:52:45.142: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-4254e3d8-14fb-4cc0-aefc-29bbe781a896 container client-container: <nil>
STEP: delete the pod
Mar 29 03:52:45.229: INFO: Waiting for pod downwardapi-volume-4254e3d8-14fb-4cc0-aefc-29bbe781a896 to disappear
Mar 29 03:52:45.261: INFO: Pod downwardapi-volume-4254e3d8-14fb-4cc0-aefc-29bbe781a896 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 29 03:52:45.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2615" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":181,"skipped":2992,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating secret secrets-3560/secret-test-d0b709ea-5884-46ef-9422-63ace736a362
STEP: Creating a pod to test consume secrets
Mar 29 03:52:45.559: INFO: Waiting up to 5m0s for pod "pod-configmaps-cefe4211-6f0d-4f11-a532-efcb0464ab4e" in namespace "secrets-3560" to be "Succeeded or Failed"
Mar 29 03:52:45.590: INFO: Pod "pod-configmaps-cefe4211-6f0d-4f11-a532-efcb0464ab4e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.491431ms
Mar 29 03:52:47.620: INFO: Pod "pod-configmaps-cefe4211-6f0d-4f11-a532-efcb0464ab4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061256317s
STEP: Saw pod success
Mar 29 03:52:47.620: INFO: Pod "pod-configmaps-cefe4211-6f0d-4f11-a532-efcb0464ab4e" satisfied condition "Succeeded or Failed"
Mar 29 03:52:47.650: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-configmaps-cefe4211-6f0d-4f11-a532-efcb0464ab4e container env-test: <nil>
STEP: delete the pod
Mar 29 03:52:47.731: INFO: Waiting for pod pod-configmaps-cefe4211-6f0d-4f11-a532-efcb0464ab4e to disappear
Mar 29 03:52:47.763: INFO: Pod pod-configmaps-cefe4211-6f0d-4f11-a532-efcb0464ab4e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 29 03:52:47.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3560" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":283,"completed":182,"skipped":3012,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
Mar 29 03:52:50.412: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 29 03:52:50.681: INFO: Deleting pod dns-7143...
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 29 03:52:50.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7143" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":283,"completed":183,"skipped":3017,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 29 03:52:50.821: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Mar 29 03:52:50.997: INFO: Waiting up to 5m0s for pod "var-expansion-2befe30f-d626-4898-a24d-5008080fbe9e" in namespace "var-expansion-9453" to be "Succeeded or Failed"
Mar 29 03:52:51.030: INFO: Pod "var-expansion-2befe30f-d626-4898-a24d-5008080fbe9e": Phase="Pending", Reason="", readiness=false. Elapsed: 33.333368ms
Mar 29 03:52:53.061: INFO: Pod "var-expansion-2befe30f-d626-4898-a24d-5008080fbe9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063519767s
STEP: Saw pod success
Mar 29 03:52:53.061: INFO: Pod "var-expansion-2befe30f-d626-4898-a24d-5008080fbe9e" satisfied condition "Succeeded or Failed"
Mar 29 03:52:53.092: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod var-expansion-2befe30f-d626-4898-a24d-5008080fbe9e container dapi-container: <nil>
STEP: delete the pod
Mar 29 03:52:53.172: INFO: Waiting for pod var-expansion-2befe30f-d626-4898-a24d-5008080fbe9e to disappear
Mar 29 03:52:53.202: INFO: Pod var-expansion-2befe30f-d626-4898-a24d-5008080fbe9e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 29 03:52:53.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9453" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":283,"completed":184,"skipped":3022,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 03:52:53.469: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8de167aa-efa7-43ce-97ef-f82e27eebe08" in namespace "projected-6391" to be "Succeeded or Failed"
Mar 29 03:52:53.498: INFO: Pod "downwardapi-volume-8de167aa-efa7-43ce-97ef-f82e27eebe08": Phase="Pending", Reason="", readiness=false. Elapsed: 28.998634ms
Mar 29 03:52:55.529: INFO: Pod "downwardapi-volume-8de167aa-efa7-43ce-97ef-f82e27eebe08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059651756s
STEP: Saw pod success
Mar 29 03:52:55.529: INFO: Pod "downwardapi-volume-8de167aa-efa7-43ce-97ef-f82e27eebe08" satisfied condition "Succeeded or Failed"
Mar 29 03:52:55.559: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-8de167aa-efa7-43ce-97ef-f82e27eebe08 container client-container: <nil>
STEP: delete the pod
Mar 29 03:52:55.643: INFO: Waiting for pod downwardapi-volume-8de167aa-efa7-43ce-97ef-f82e27eebe08 to disappear
Mar 29 03:52:55.673: INFO: Pod downwardapi-volume-8de167aa-efa7-43ce-97ef-f82e27eebe08 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 29 03:52:55.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6391" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":185,"skipped":3050,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 29 03:52:58.044: INFO: Initial restart count of pod liveness-dae49fbc-6615-4d27-b98f-edf1f96b53e3 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 29 03:56:59.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-882" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":283,"completed":186,"skipped":3057,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  test/e2e/framework/framework.go:175
Mar 29 03:57:03.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8745" for this suite.
STEP: Destroying namespace "webhook-8745-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":283,"completed":187,"skipped":3062,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-eaa37d73-4cbd-4c1d-93e1-4e75e355a4f6
STEP: Creating a pod to test consume configMaps
Mar 29 03:57:04.475: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c4ff8086-d7e0-44c6-8cde-a120a1162da5" in namespace "projected-2690" to be "Succeeded or Failed"
Mar 29 03:57:04.505: INFO: Pod "pod-projected-configmaps-c4ff8086-d7e0-44c6-8cde-a120a1162da5": Phase="Pending", Reason="", readiness=false. Elapsed: 29.742698ms
Mar 29 03:57:06.538: INFO: Pod "pod-projected-configmaps-c4ff8086-d7e0-44c6-8cde-a120a1162da5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062334113s
STEP: Saw pod success
Mar 29 03:57:06.538: INFO: Pod "pod-projected-configmaps-c4ff8086-d7e0-44c6-8cde-a120a1162da5" satisfied condition "Succeeded or Failed"
Mar 29 03:57:06.567: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-projected-configmaps-c4ff8086-d7e0-44c6-8cde-a120a1162da5 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 03:57:06.689: INFO: Waiting for pod pod-projected-configmaps-c4ff8086-d7e0-44c6-8cde-a120a1162da5 to disappear
Mar 29 03:57:06.719: INFO: Pod pod-projected-configmaps-c4ff8086-d7e0-44c6-8cde-a120a1162da5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 29 03:57:06.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2690" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":283,"completed":188,"skipped":3066,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-32d85451-c4bf-4188-9f13-9e5dbe30573c
STEP: Creating a pod to test consume secrets
Mar 29 03:57:07.015: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-96e5448f-e3f7-45d6-9889-892164c97591" in namespace "projected-3590" to be "Succeeded or Failed"
Mar 29 03:57:07.046: INFO: Pod "pod-projected-secrets-96e5448f-e3f7-45d6-9889-892164c97591": Phase="Pending", Reason="", readiness=false. Elapsed: 31.338305ms
Mar 29 03:57:09.078: INFO: Pod "pod-projected-secrets-96e5448f-e3f7-45d6-9889-892164c97591": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063133752s
STEP: Saw pod success
Mar 29 03:57:09.078: INFO: Pod "pod-projected-secrets-96e5448f-e3f7-45d6-9889-892164c97591" satisfied condition "Succeeded or Failed"
Mar 29 03:57:09.110: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-projected-secrets-96e5448f-e3f7-45d6-9889-892164c97591 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 29 03:57:09.214: INFO: Waiting for pod pod-projected-secrets-96e5448f-e3f7-45d6-9889-892164c97591 to disappear
Mar 29 03:57:09.247: INFO: Pod pod-projected-secrets-96e5448f-e3f7-45d6-9889-892164c97591 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 29 03:57:09.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3590" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":189,"skipped":3091,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 20 lines ...
Mar 29 03:57:11.548: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar 29 03:57:11.548: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe pod agnhost-master-cqw5z --namespace=kubectl-5020'
Mar 29 03:57:11.807: INFO: stderr: ""
Mar 29 03:57:11.807: INFO: stdout: "Name:         agnhost-master-cqw5z\nNamespace:    kubectl-5020\nPriority:     0\nNode:         test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal/10.150.0.6\nStart Time:   Sun, 29 Mar 2020 03:57:10 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  cni.projectcalico.org/podIP: 192.168.234.45/32\nStatus:       Running\nIP:           192.168.234.45\nIPs:\n  IP:           192.168.234.45\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://b871bd87d27451103d87855d2f3b2ba3a104db698ab9aad3ff63ca888b473fa8\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 29 Mar 2020 03:57:10 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vbqd7 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-vbqd7:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-vbqd7\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                                                           Message\n  ----    ------     ----       ----                                                           -------\n  Normal  Scheduled  <unknown>  default-scheduler                                              Successfully assigned kubectl-5020/agnhost-master-cqw5z to test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal\n  Normal  Pulled     1s         kubelet, test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    1s         kubelet, test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal  Created container agnhost-master\n  Normal  Started    1s         kubelet, test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal  Started container agnhost-master\n"
Mar 29 03:57:11.807: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe rc agnhost-master --namespace=kubectl-5020'
Mar 29 03:57:12.126: INFO: stderr: ""
Mar 29 03:57:12.126: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-5020\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  2s    replication-controller  Created pod: agnhost-master-cqw5z\n"
Mar 29 03:57:12.126: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe service agnhost-master --namespace=kubectl-5020'
Mar 29 03:57:12.426: INFO: stderr: ""
Mar 29 03:57:12.426: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-5020\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.101.34.36\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.234.45:6379\nSession Affinity:  None\nEvents:            <none>\n"
Mar 29 03:57:12.482: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe node test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal'
Mar 29 03:57:12.845: INFO: stderr: ""
Mar 29 03:57:12.845: INFO: stdout: "Name:               test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-2\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=us-east4\n                    failure-domain.beta.kubernetes.io/zone=us-east4-a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    projectcalico.org/IPv4Address: 10.150.0.2/32\n                    projectcalico.org/IPv4IPIPTunnelAddr: 192.168.46.128\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 29 Mar 2020 02:56:34 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal\n  AcquireTime:     <unset>\n  RenewTime:       Sun, 29 Mar 2020 03:57:08 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sun, 29 Mar 2020 02:57:36 +0000   Sun, 29 Mar 2020 02:57:36 +0000   CalicoIsUp                   Calico is running on this node\n  MemoryPressure       False   Sun, 29 Mar 2020 03:56:19 +0000   Sun, 29 Mar 2020 02:56:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 29 Mar 2020 03:56:19 +0000   Sun, 29 Mar 2020 02:56:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 29 Mar 2020 03:56:19 +0000   Sun, 29 Mar 2020 02:56:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 29 Mar 2020 03:56:19 +0000   Sun, 29 Mar 2020 02:57:04 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:   10.150.0.2\n  ExternalIP:   \n  InternalDNS:  test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal\n  Hostname:     test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        2\n  ephemeral-storage:          30308240Ki\n  hugepages-1Gi:              0\n  hugepages-2Mi:              0\n  memory:                     7648892Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        2\n  ephemeral-storage:          27932073938\n  hugepages-1Gi:              0\n  hugepages-2Mi:              0\n  memory:                     7546492Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 6b8497bd24ef9771cb4dda4120ac6c81\n  System UUID:                6b8497bd-24ef-9771-cb4d-da4120ac6c81\n  Boot ID:                    2e173cd4-d5b7-4147-8a45-e29c60f2970c\n  Kernel Version:             5.0.0-1033-gcp\n  OS Image:                   Ubuntu 18.04.4 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3\n  Kubelet Version:            v1.16.2\n  Kube-Proxy Version:         v1.16.2\nProviderID:                   gce://k8s-e2e-gci-gce-alpha1-5/us-east4-a/test1-controlplane-0\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                                                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                                                                ------------  ----------  ---------------  -------------  ---\n  kube-system                 calico-kube-controllers-564b6667d7-z896v                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         60m\n  kube-system                 calico-node-9hvrg                                                                   250m (12%)    0 (0%)      0 (0%)           0 (0%)         60m\n  kube-system                 coredns-5644d7b6d9-pxtn9                                                            100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59m\n  kube-system                 coredns-5644d7b6d9-wpgvw                                                            100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59m\n  kube-system                 etcd-test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         59m\n  kube-system                 kube-apiserver-test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59m\n  kube-system                 kube-controller-manager-test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59m\n  kube-system                 kube-proxy-dvtvk                                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         59m\n  kube-system                 kube-scheduler-test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests    Limits\n  --------                   --------    ------\n  cpu               
         1 (50%)     0 (0%)\n  memory                     140Mi (1%)  340Mi (4%)\n  ephemeral-storage          0 (0%)      0 (0%)\n  hugepages-1Gi              0 (0%)      0 (0%)\n  hugepages-2Mi              0 (0%)      0 (0%)\n  attachable-volumes-gce-pd  0           0\nEvents:\n  Type     Reason                   Age                From                                                                  Message\n  ----     ------                   ----               ----                                                                  -------\n  Normal   Starting                 62m                kubelet, test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal     Starting kubelet.\n  Warning  InvalidDiskCapacity      62m                kubelet, test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal     invalid capacity 0 on image filesystem\n  Normal   NodeHasSufficientMemory  62m (x8 over 62m)  kubelet, test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal     Node test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure    62m (x7 over 62m)  kubelet, test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal     Node test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     62m (x7 over 62m)  kubelet, test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal     Node test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced  62m                kubelet, test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal     Updated Node Allocatable limit across pods\n  Normal   Starting                 59m                kube-proxy, test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal  Starting kube-proxy.\n"
Mar 29 03:57:12.845: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe namespace kubectl-5020'
Mar 29 03:57:13.129: INFO: stderr: ""
Mar 29 03:57:13.129: INFO: stdout: "Name:         kubectl-5020\nLabels:       e2e-framework=kubectl\n              e2e-run=7b25233e-14d7-4c6a-9869-f5fa4150e456\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 03:57:13.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5020" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":283,"completed":190,"skipped":3153,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 29 03:57:13.221: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Mar 29 03:57:13.355: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 29 03:57:16.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-380" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":283,"completed":191,"skipped":3156,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
... skipping 416 lines ...
Mar 29 03:57:26.131: INFO: 99 %ile: 786.960768ms
Mar 29 03:57:26.131: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  test/e2e/framework/framework.go:175
Mar 29 03:57:26.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-6759" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":283,"completed":192,"skipped":3175,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-7126df8a-6b2f-4376-a5e9-7537377f7cb7
STEP: Creating a pod to test consume secrets
Mar 29 03:57:26.439: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-16dab6af-f3f2-4c47-a64a-16ea98e0f3bc" in namespace "projected-2441" to be "Succeeded or Failed"
Mar 29 03:57:26.471: INFO: Pod "pod-projected-secrets-16dab6af-f3f2-4c47-a64a-16ea98e0f3bc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.699235ms
Mar 29 03:57:28.502: INFO: Pod "pod-projected-secrets-16dab6af-f3f2-4c47-a64a-16ea98e0f3bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06319167s
STEP: Saw pod success
Mar 29 03:57:28.502: INFO: Pod "pod-projected-secrets-16dab6af-f3f2-4c47-a64a-16ea98e0f3bc" satisfied condition "Succeeded or Failed"
Mar 29 03:57:28.532: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-projected-secrets-16dab6af-f3f2-4c47-a64a-16ea98e0f3bc container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 29 03:57:28.611: INFO: Waiting for pod pod-projected-secrets-16dab6af-f3f2-4c47-a64a-16ea98e0f3bc to disappear
Mar 29 03:57:28.641: INFO: Pod pod-projected-secrets-16dab6af-f3f2-4c47-a64a-16ea98e0f3bc no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 29 03:57:28.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2441" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":193,"skipped":3175,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
Mar 29 03:59:53.233: INFO: Restart count of pod container-probe-5758/liveness-c612ba0c-a39d-44c7-b233-02c09ffe2247 is now 5 (2m22.233201722s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 29 03:59:53.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5758" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":283,"completed":194,"skipped":3182,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 18 lines ...
STEP: Deleting second CR
Mar 29 04:00:44.012: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-29T04:00:03Z generation:2 name:name2 resourceVersion:20974 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0d453141-6871-4479-b370-d7b11ae68877] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 04:00:54.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-555" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":283,"completed":195,"skipped":3182,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 29 04:00:55.020: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 04:00:56.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5044" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":283,"completed":196,"skipped":3234,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 29 04:00:56.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7659" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":283,"completed":197,"skipped":3238,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 2 lines ...
Mar 29 04:00:56.771: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 29 04:00:59.062: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 29 04:00:59.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1626" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":198,"skipped":3252,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 29 04:00:59.233: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Mar 29 04:00:59.402: INFO: Waiting up to 5m0s for pod "var-expansion-65729707-9150-4181-8039-73e723d49877" in namespace "var-expansion-2440" to be "Succeeded or Failed"
Mar 29 04:00:59.433: INFO: Pod "var-expansion-65729707-9150-4181-8039-73e723d49877": Phase="Pending", Reason="", readiness=false. Elapsed: 31.055552ms
Mar 29 04:01:01.464: INFO: Pod "var-expansion-65729707-9150-4181-8039-73e723d49877": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061791148s
STEP: Saw pod success
Mar 29 04:01:01.464: INFO: Pod "var-expansion-65729707-9150-4181-8039-73e723d49877" satisfied condition "Succeeded or Failed"
Mar 29 04:01:01.495: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod var-expansion-65729707-9150-4181-8039-73e723d49877 container dapi-container: <nil>
STEP: delete the pod
Mar 29 04:01:01.588: INFO: Waiting for pod var-expansion-65729707-9150-4181-8039-73e723d49877 to disappear
Mar 29 04:01:01.619: INFO: Pod var-expansion-65729707-9150-4181-8039-73e723d49877 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 29 04:01:01.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2440" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":283,"completed":199,"skipped":3265,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 29 04:01:01.721: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Mar 29 04:01:01.866: INFO: PodSpec: initContainers in spec.initContainers
Mar 29 04:01:43.534: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9dac72b3-5c3a-48e8-adb4-993c85f018e2", GenerateName:"", Namespace:"init-container-8082", SelfLink:"/api/v1/namespaces/init-container-8082/pods/pod-init-9dac72b3-5c3a-48e8-adb4-993c85f018e2", UID:"a6de4cc0-ac5e-4324-a0df-269e9f984562", ResourceVersion:"21230", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721051261, loc:(*time.Location)(0x7b56f20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"866680008"}, Annotations:map[string]string{"cni.projectcalico.org/podIP":"192.168.234.41/32"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-7qdkv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002322a80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7qdkv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7qdkv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-7qdkv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002ccbd18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000532770), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002ccbdc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002ccbde0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002ccbde8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002ccbdec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721051261, loc:(*time.Location)(0x7b56f20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721051261, loc:(*time.Location)(0x7b56f20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721051261, loc:(*time.Location)(0x7b56f20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721051261, loc:(*time.Location)(0x7b56f20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.150.0.6", PodIP:"192.168.234.41", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.234.41"}}, StartTime:(*v1.Time)(0xc001c360e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0005328c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000532930)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://22d622d3926705342c01ae63e29323b5e7b2b3152836e4f605bea759a8b1a446", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001c362a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001c36120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002ccbe6f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 29 04:01:43.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8082" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":283,"completed":200,"skipped":3291,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 29 04:01:43.764: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 04:01:43.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8490" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":283,"completed":201,"skipped":3293,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 29 04:01:49.516: INFO: stderr: ""
Mar 29 04:01:49.516: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9412-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 04:01:52.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6714" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":283,"completed":202,"skipped":3341,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Mar 29 04:01:54.006: INFO: created pod pod-service-account-nomountsa-nomountspec
Mar 29 04:01:54.006: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Mar 29 04:01:54.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1091" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":283,"completed":203,"skipped":3353,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Mar 29 04:01:54.559: INFO: stderr: ""
Mar 29 04:01:54.559: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd.projectcalico.org/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 04:01:54.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4161" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":283,"completed":204,"skipped":3366,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 29 04:01:54.666: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 29 04:01:54.845: INFO: Waiting up to 5m0s for pod "pod-633b9d95-1668-4b40-a05f-5561633e5dbc" in namespace "emptydir-7965" to be "Succeeded or Failed"
Mar 29 04:01:54.877: INFO: Pod "pod-633b9d95-1668-4b40-a05f-5561633e5dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 31.580708ms
Mar 29 04:01:56.907: INFO: Pod "pod-633b9d95-1668-4b40-a05f-5561633e5dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062227347s
Mar 29 04:01:58.938: INFO: Pod "pod-633b9d95-1668-4b40-a05f-5561633e5dbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092746282s
STEP: Saw pod success
Mar 29 04:01:58.938: INFO: Pod "pod-633b9d95-1668-4b40-a05f-5561633e5dbc" satisfied condition "Succeeded or Failed"
Mar 29 04:01:58.968: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-633b9d95-1668-4b40-a05f-5561633e5dbc container test-container: <nil>
STEP: delete the pod
Mar 29 04:01:59.054: INFO: Waiting for pod pod-633b9d95-1668-4b40-a05f-5561633e5dbc to disappear
Mar 29 04:01:59.084: INFO: Pod pod-633b9d95-1668-4b40-a05f-5561633e5dbc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 04:01:59.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7965" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":205,"skipped":3377,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 04:01:59.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-600d2550-17ba-4c92-acd6-ace5afe21e2a" in namespace "downward-api-1631" to be "Succeeded or Failed"
Mar 29 04:01:59.422: INFO: Pod "downwardapi-volume-600d2550-17ba-4c92-acd6-ace5afe21e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.911592ms
Mar 29 04:02:01.453: INFO: Pod "downwardapi-volume-600d2550-17ba-4c92-acd6-ace5afe21e2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.066042309s
STEP: Saw pod success
Mar 29 04:02:01.453: INFO: Pod "downwardapi-volume-600d2550-17ba-4c92-acd6-ace5afe21e2a" satisfied condition "Succeeded or Failed"
Mar 29 04:02:01.484: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-600d2550-17ba-4c92-acd6-ace5afe21e2a container client-container: <nil>
STEP: delete the pod
Mar 29 04:02:01.570: INFO: Waiting for pod downwardapi-volume-600d2550-17ba-4c92-acd6-ace5afe21e2a to disappear
Mar 29 04:02:01.601: INFO: Pod downwardapi-volume-600d2550-17ba-4c92-acd6-ace5afe21e2a no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 29 04:02:01.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1631" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":283,"completed":206,"skipped":3445,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-dac97542-0fea-4ffe-8f24-d9900e0d209a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 29 04:02:06.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9493" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":207,"skipped":3449,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 17 lines ...
Mar 29 04:02:15.042: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 29 04:02:15.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4386" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":283,"completed":208,"skipped":3467,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 29 04:02:15.172: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
Mar 29 04:02:15.355: INFO: Waiting up to 5m0s for pod "var-expansion-2496b91a-8bb1-4b89-b79e-92cf8ce9b316" in namespace "var-expansion-856" to be "Succeeded or Failed"
Mar 29 04:02:15.389: INFO: Pod "var-expansion-2496b91a-8bb1-4b89-b79e-92cf8ce9b316": Phase="Pending", Reason="", readiness=false. Elapsed: 34.519649ms
Mar 29 04:02:17.419: INFO: Pod "var-expansion-2496b91a-8bb1-4b89-b79e-92cf8ce9b316": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064212576s
STEP: Saw pod success
Mar 29 04:02:17.419: INFO: Pod "var-expansion-2496b91a-8bb1-4b89-b79e-92cf8ce9b316" satisfied condition "Succeeded or Failed"
Mar 29 04:02:17.449: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod var-expansion-2496b91a-8bb1-4b89-b79e-92cf8ce9b316 container dapi-container: <nil>
STEP: delete the pod
Mar 29 04:02:17.532: INFO: Waiting for pod var-expansion-2496b91a-8bb1-4b89-b79e-92cf8ce9b316 to disappear
Mar 29 04:02:17.563: INFO: Pod var-expansion-2496b91a-8bb1-4b89-b79e-92cf8ce9b316 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 29 04:02:17.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-856" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":283,"completed":209,"skipped":3525,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  test/e2e/framework/framework.go:175
Mar 29 04:02:32.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6060" for this suite.
STEP: Destroying namespace "webhook-6060-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":283,"completed":210,"skipped":3556,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Mar 29 04:02:34.974: INFO: Trying to dial the pod
Mar 29 04:02:40.070: INFO: Controller my-hostname-basic-2c269464-bf7d-4284-ae7e-c036a6565c83: Got expected result from replica 1 [my-hostname-basic-2c269464-bf7d-4284-ae7e-c036a6565c83-f4q79]: "my-hostname-basic-2c269464-bf7d-4284-ae7e-c036a6565c83-f4q79", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Mar 29 04:02:40.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4221" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":283,"completed":211,"skipped":3567,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 34 lines ...
Mar 29 04:04:54.205: INFO: Deleting pod "var-expansion-b1ca6554-7f1f-49c3-b2c4-2f45b14ee46a" in namespace "var-expansion-5018"
Mar 29 04:04:54.242: INFO: Wait up to 5m0s for pod "var-expansion-b1ca6554-7f1f-49c3-b2c4-2f45b14ee46a" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 29 04:05:38.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5018" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":283,"completed":212,"skipped":3573,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 29 04:05:45.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7471" for this suite.
STEP: Destroying namespace "webhook-7471-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":283,"completed":213,"skipped":3578,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 105 lines ...
<a href="btmp">btmp</a>
<a href="ch... (200; 31.428379ms)
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 29 04:05:46.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7121" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":283,"completed":214,"skipped":3586,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
Mar 29 04:05:48.877: INFO: Pod pod-hostip-9609679e-d2ee-47f2-bb07-d5556ac0f773 has hostIP: 10.150.0.3
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 29 04:05:48.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7109" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":283,"completed":215,"skipped":3597,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:175
Mar 29 04:05:49.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4518" for this suite.
STEP: Destroying namespace "nspatchtest-99c1f1d9-efe0-4e0e-822f-a124cf75b81a-570" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":283,"completed":216,"skipped":3613,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 29 04:05:49.510: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 04:05:49.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7314" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":283,"completed":217,"skipped":3645,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Mar 29 04:05:56.834: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-5339-crds.spec'
Mar 29 04:05:57.119: INFO: stderr: ""
Mar 29 04:05:57.119: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5339-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Mar 29 04:05:57.119: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-5339-crds.spec.bars'
Mar 29 04:05:57.411: INFO: stderr: ""
Mar 29 04:05:57.411: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5339-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Mar 29 04:05:57.411: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-5339-crds.spec.bars2'
Mar 29 04:05:57.789: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 04:06:01.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3966" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":283,"completed":218,"skipped":3739,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 29 04:06:01.362: INFO: Waiting up to 5m0s for pod "busybox-user-65534-7b25e461-3e04-4aed-ac04-9872b461e44f" in namespace "security-context-test-341" to be "Succeeded or Failed"
Mar 29 04:06:01.394: INFO: Pod "busybox-user-65534-7b25e461-3e04-4aed-ac04-9872b461e44f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.666128ms
Mar 29 04:06:03.424: INFO: Pod "busybox-user-65534-7b25e461-3e04-4aed-ac04-9872b461e44f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06197364s
Mar 29 04:06:03.424: INFO: Pod "busybox-user-65534-7b25e461-3e04-4aed-ac04-9872b461e44f" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 29 04:06:03.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-341" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":219,"skipped":3743,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-5106/configmap-test-f822927f-4df6-4603-952e-6dcee029685a
STEP: Creating a pod to test consume configMaps
Mar 29 04:06:03.718: INFO: Waiting up to 5m0s for pod "pod-configmaps-5cba3e93-bc8c-4597-9f1c-dcdd76a4587d" in namespace "configmap-5106" to be "Succeeded or Failed"
Mar 29 04:06:03.752: INFO: Pod "pod-configmaps-5cba3e93-bc8c-4597-9f1c-dcdd76a4587d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.403145ms
Mar 29 04:06:05.784: INFO: Pod "pod-configmaps-5cba3e93-bc8c-4597-9f1c-dcdd76a4587d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.066216187s
STEP: Saw pod success
Mar 29 04:06:05.784: INFO: Pod "pod-configmaps-5cba3e93-bc8c-4597-9f1c-dcdd76a4587d" satisfied condition "Succeeded or Failed"
Mar 29 04:06:05.814: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-configmaps-5cba3e93-bc8c-4597-9f1c-dcdd76a4587d container env-test: <nil>
STEP: delete the pod
Mar 29 04:06:05.894: INFO: Waiting for pod pod-configmaps-5cba3e93-bc8c-4597-9f1c-dcdd76a4587d to disappear
Mar 29 04:06:05.923: INFO: Pod pod-configmaps-5cba3e93-bc8c-4597-9f1c-dcdd76a4587d no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 29 04:06:05.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5106" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":283,"completed":220,"skipped":3747,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 21 lines ...
Mar 29 04:06:18.516: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 29 04:06:18.546: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 29 04:06:18.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1050" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":283,"completed":221,"skipped":3747,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 29 04:06:18.770: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig proxy --unix-socket=/tmp/kubectl-proxy-unix183769803/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 04:06:18.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1047" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":283,"completed":222,"skipped":3767,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 29 04:06:18.906: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 29 04:06:19.073: INFO: Waiting up to 5m0s for pod "pod-7cd5a473-3113-4c16-9016-1c2d132ed1c4" in namespace "emptydir-7168" to be "Succeeded or Failed"
Mar 29 04:06:19.104: INFO: Pod "pod-7cd5a473-3113-4c16-9016-1c2d132ed1c4": Phase="Pending", Reason="", readiness=false. Elapsed: 30.294736ms
Mar 29 04:06:21.134: INFO: Pod "pod-7cd5a473-3113-4c16-9016-1c2d132ed1c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060283099s
STEP: Saw pod success
Mar 29 04:06:21.134: INFO: Pod "pod-7cd5a473-3113-4c16-9016-1c2d132ed1c4" satisfied condition "Succeeded or Failed"
Mar 29 04:06:21.163: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-7cd5a473-3113-4c16-9016-1c2d132ed1c4 container test-container: <nil>
STEP: delete the pod
Mar 29 04:06:21.252: INFO: Waiting for pod pod-7cd5a473-3113-4c16-9016-1c2d132ed1c4 to disappear
Mar 29 04:06:21.283: INFO: Pod pod-7cd5a473-3113-4c16-9016-1c2d132ed1c4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 04:06:21.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7168" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":223,"skipped":3777,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 29 04:06:21.374: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename webhook
... skipping 6 lines ...
STEP: Wait for the deployment to be ready
Mar 29 04:06:22.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721051582, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721051582, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721051582, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721051582, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 29 04:06:24.712: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721051582, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721051582, loc:(*time.Location)(0x7b56f20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721051582, loc:(*time.Location)(0x7b56f20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721051582, loc:(*time.Location)(0x7b56f20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 29 04:06:27.759: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 04:06:27.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7048" for this suite.
STEP: Destroying namespace "webhook-7048-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":283,"completed":224,"skipped":3794,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-3cb8f5d3-8fb2-43d4-af73-3d40e9c41f2f
STEP: Creating a pod to test consume configMaps
Mar 29 04:06:28.491: INFO: Waiting up to 5m0s for pod "pod-configmaps-c38a524e-c19d-415e-a14b-9ce76fbb462f" in namespace "configmap-4149" to be "Succeeded or Failed"
Mar 29 04:06:28.522: INFO: Pod "pod-configmaps-c38a524e-c19d-415e-a14b-9ce76fbb462f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.246386ms
Mar 29 04:06:30.553: INFO: Pod "pod-configmaps-c38a524e-c19d-415e-a14b-9ce76fbb462f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061828209s
STEP: Saw pod success
Mar 29 04:06:30.553: INFO: Pod "pod-configmaps-c38a524e-c19d-415e-a14b-9ce76fbb462f" satisfied condition "Succeeded or Failed"
Mar 29 04:06:30.582: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-configmaps-c38a524e-c19d-415e-a14b-9ce76fbb462f container configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 04:06:30.669: INFO: Waiting for pod pod-configmaps-c38a524e-c19d-415e-a14b-9ce76fbb462f to disappear
Mar 29 04:06:30.698: INFO: Pod pod-configmaps-c38a524e-c19d-415e-a14b-9ce76fbb462f no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 29 04:06:30.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4149" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":283,"completed":225,"skipped":3797,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 29 04:06:36.154: INFO: stderr: ""
Mar 29 04:06:36.154: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2811-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<map[string]>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 04:06:39.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9421" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":283,"completed":226,"skipped":3850,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 29 04:06:41.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9074" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":227,"skipped":3850,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 59 lines ...
Mar 29 04:08:55.270: INFO: Waiting for statefulset status.replicas updated to 0
Mar 29 04:08:55.300: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 29 04:08:55.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9122" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":283,"completed":228,"skipped":3907,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 29 04:08:55.487: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Mar 29 04:10:55.751: INFO: Deleting pod "var-expansion-4cc4070e-bc56-43ba-9653-bd80181073af" in namespace "var-expansion-230"
Mar 29 04:10:55.787: INFO: Wait up to 5m0s for pod "var-expansion-4cc4070e-bc56-43ba-9653-bd80181073af" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 29 04:10:57.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-230" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":283,"completed":229,"skipped":3922,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 29 04:10:58.078: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 04:11:04.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1040" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":283,"completed":230,"skipped":3940,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-576f1410-9c76-46af-ab43-5385e39608c9
STEP: Creating a pod to test consume configMaps
Mar 29 04:11:04.843: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-721fa157-1395-4e2d-b8f9-148979711c97" in namespace "projected-374" to be "Succeeded or Failed"
Mar 29 04:11:04.880: INFO: Pod "pod-projected-configmaps-721fa157-1395-4e2d-b8f9-148979711c97": Phase="Pending", Reason="", readiness=false. Elapsed: 37.23989ms
Mar 29 04:11:06.910: INFO: Pod "pod-projected-configmaps-721fa157-1395-4e2d-b8f9-148979711c97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.067387471s
STEP: Saw pod success
Mar 29 04:11:06.910: INFO: Pod "pod-projected-configmaps-721fa157-1395-4e2d-b8f9-148979711c97" satisfied condition "Succeeded or Failed"
Mar 29 04:11:06.940: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-projected-configmaps-721fa157-1395-4e2d-b8f9-148979711c97 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 04:11:07.055: INFO: Waiting for pod pod-projected-configmaps-721fa157-1395-4e2d-b8f9-148979711c97 to disappear
Mar 29 04:11:07.084: INFO: Pod pod-projected-configmaps-721fa157-1395-4e2d-b8f9-148979711c97 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 29 04:11:07.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-374" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":231,"skipped":3940,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Mar 29 04:11:07.313: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 29 04:11:10.571: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 29 04:11:23.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8204" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":283,"completed":232,"skipped":3949,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 11 lines ...
Mar 29 04:11:26.149: INFO: Trying to dial the pod
Mar 29 04:11:31.243: INFO: Controller my-hostname-basic-9d1b1c16-d153-4cca-8044-889f51d8547d: Got expected result from replica 1 [my-hostname-basic-9d1b1c16-d153-4cca-8044-889f51d8547d-tmgjd]: "my-hostname-basic-9d1b1c16-d153-4cca-8044-889f51d8547d-tmgjd", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 29 04:11:31.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2870" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":283,"completed":233,"skipped":3973,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 29 04:11:32.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0329 04:11:32.322068   24871 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-7983" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":283,"completed":234,"skipped":3976,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 29 04:11:37.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-359" for this suite.
STEP: Destroying namespace "webhook-359-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":283,"completed":235,"skipped":4017,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  test/e2e/framework/framework.go:175
Mar 29 04:11:45.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5137" for this suite.
STEP: Destroying namespace "webhook-5137-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":283,"completed":236,"skipped":4017,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 29 04:11:47.004: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 29 04:11:47.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4334" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":237,"skipped":4027,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 29 04:11:49.686: INFO: Initial restart count of pod busybox-31547c35-f357-4b78-be5d-16b27970f3fc is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 29 04:15:51.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1053" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":283,"completed":238,"skipped":4039,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 29 04:16:08.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9372" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":283,"completed":239,"skipped":4050,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-cf9f87a0-099e-43bb-89a6-e503d60936e6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 29 04:16:14.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9330" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":240,"skipped":4071,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Mar 29 04:16:17.117: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Mar 29 04:16:17.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9668" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":283,"completed":241,"skipped":4080,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Mar 29 04:16:20.217: INFO: Successfully updated pod "annotationupdatebc7b47b9-0aca-470c-aee9-33df7f596a66"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 29 04:16:22.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7988" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":283,"completed":242,"skipped":4095,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
Mar 29 04:16:26.477: INFO: stderr: ""
Mar 29 04:16:26.477: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 04:16:26.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4142" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":283,"completed":243,"skipped":4106,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 04:16:26.757: INFO: Waiting up to 5m0s for pod "downwardapi-volume-065b9199-9ce4-4314-bc4e-ad42a42cf931" in namespace "downward-api-4097" to be "Succeeded or Failed"
Mar 29 04:16:26.787: INFO: Pod "downwardapi-volume-065b9199-9ce4-4314-bc4e-ad42a42cf931": Phase="Pending", Reason="", readiness=false. Elapsed: 29.988736ms
Mar 29 04:16:28.817: INFO: Pod "downwardapi-volume-065b9199-9ce4-4314-bc4e-ad42a42cf931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060265985s
STEP: Saw pod success
Mar 29 04:16:28.817: INFO: Pod "downwardapi-volume-065b9199-9ce4-4314-bc4e-ad42a42cf931" satisfied condition "Succeeded or Failed"
Mar 29 04:16:28.847: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-065b9199-9ce4-4314-bc4e-ad42a42cf931 container client-container: <nil>
STEP: delete the pod
Mar 29 04:16:28.928: INFO: Waiting for pod downwardapi-volume-065b9199-9ce4-4314-bc4e-ad42a42cf931 to disappear
Mar 29 04:16:28.957: INFO: Pod downwardapi-volume-065b9199-9ce4-4314-bc4e-ad42a42cf931 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 29 04:16:28.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4097" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":244,"skipped":4108,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-whxx
STEP: Creating a pod to test atomic-volume-subpath
Mar 29 04:16:29.290: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-whxx" in namespace "subpath-7019" to be "Succeeded or Failed"
Mar 29 04:16:29.324: INFO: Pod "pod-subpath-test-secret-whxx": Phase="Pending", Reason="", readiness=false. Elapsed: 34.082224ms
Mar 29 04:16:31.355: INFO: Pod "pod-subpath-test-secret-whxx": Phase="Running", Reason="", readiness=true. Elapsed: 2.064857235s
Mar 29 04:16:33.385: INFO: Pod "pod-subpath-test-secret-whxx": Phase="Running", Reason="", readiness=true. Elapsed: 4.095111058s
Mar 29 04:16:35.415: INFO: Pod "pod-subpath-test-secret-whxx": Phase="Running", Reason="", readiness=true. Elapsed: 6.125154963s
Mar 29 04:16:37.446: INFO: Pod "pod-subpath-test-secret-whxx": Phase="Running", Reason="", readiness=true. Elapsed: 8.1556628s
Mar 29 04:16:39.476: INFO: Pod "pod-subpath-test-secret-whxx": Phase="Running", Reason="", readiness=true. Elapsed: 10.186035457s
Mar 29 04:16:41.506: INFO: Pod "pod-subpath-test-secret-whxx": Phase="Running", Reason="", readiness=true. Elapsed: 12.216356023s
Mar 29 04:16:43.536: INFO: Pod "pod-subpath-test-secret-whxx": Phase="Running", Reason="", readiness=true. Elapsed: 14.246252176s
Mar 29 04:16:45.568: INFO: Pod "pod-subpath-test-secret-whxx": Phase="Running", Reason="", readiness=true. Elapsed: 16.277670647s
Mar 29 04:16:47.598: INFO: Pod "pod-subpath-test-secret-whxx": Phase="Running", Reason="", readiness=true. Elapsed: 18.308253087s
Mar 29 04:16:49.629: INFO: Pod "pod-subpath-test-secret-whxx": Phase="Running", Reason="", readiness=true. Elapsed: 20.3387217s
Mar 29 04:16:51.659: INFO: Pod "pod-subpath-test-secret-whxx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.369323115s
STEP: Saw pod success
Mar 29 04:16:51.659: INFO: Pod "pod-subpath-test-secret-whxx" satisfied condition "Succeeded or Failed"
Mar 29 04:16:51.689: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-subpath-test-secret-whxx container test-container-subpath-secret-whxx: <nil>
STEP: delete the pod
Mar 29 04:16:51.769: INFO: Waiting for pod pod-subpath-test-secret-whxx to disappear
Mar 29 04:16:51.800: INFO: Pod pod-subpath-test-secret-whxx no longer exists
STEP: Deleting pod pod-subpath-test-secret-whxx
Mar 29 04:16:51.800: INFO: Deleting pod "pod-subpath-test-secret-whxx" in namespace "subpath-7019"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 29 04:16:51.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7019" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":283,"completed":245,"skipped":4108,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 29 04:16:51.923: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Mar 29 04:16:52.111: INFO: Waiting up to 5m0s for pod "client-containers-ec1ddbb1-0e95-4e34-8c30-a217f7f56cc7" in namespace "containers-6889" to be "Succeeded or Failed"
Mar 29 04:16:52.142: INFO: Pod "client-containers-ec1ddbb1-0e95-4e34-8c30-a217f7f56cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.772329ms
Mar 29 04:16:54.172: INFO: Pod "client-containers-ec1ddbb1-0e95-4e34-8c30-a217f7f56cc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06076351s
STEP: Saw pod success
Mar 29 04:16:54.172: INFO: Pod "client-containers-ec1ddbb1-0e95-4e34-8c30-a217f7f56cc7" satisfied condition "Succeeded or Failed"
Mar 29 04:16:54.205: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod client-containers-ec1ddbb1-0e95-4e34-8c30-a217f7f56cc7 container test-container: <nil>
STEP: delete the pod
Mar 29 04:16:54.289: INFO: Waiting for pod client-containers-ec1ddbb1-0e95-4e34-8c30-a217f7f56cc7 to disappear
Mar 29 04:16:54.319: INFO: Pod client-containers-ec1ddbb1-0e95-4e34-8c30-a217f7f56cc7 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 29 04:16:54.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6889" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":283,"completed":246,"skipped":4153,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Mar 29 04:16:59.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4988" for this suite.
STEP: Destroying namespace "webhook-4988-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":283,"completed":247,"skipped":4165,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 29 04:16:59.626: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-b617f80e-745a-404b-9db6-bc77b903a785" in namespace "security-context-test-1840" to be "Succeeded or Failed"
Mar 29 04:16:59.659: INFO: Pod "busybox-privileged-false-b617f80e-745a-404b-9db6-bc77b903a785": Phase="Pending", Reason="", readiness=false. Elapsed: 32.470438ms
Mar 29 04:17:01.690: INFO: Pod "busybox-privileged-false-b617f80e-745a-404b-9db6-bc77b903a785": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063068735s
Mar 29 04:17:01.690: INFO: Pod "busybox-privileged-false-b617f80e-745a-404b-9db6-bc77b903a785" satisfied condition "Succeeded or Failed"
Mar 29 04:17:01.725: INFO: Got logs for pod "busybox-privileged-false-b617f80e-745a-404b-9db6-bc77b903a785": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 29 04:17:01.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1840" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":248,"skipped":4181,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-9232a9a7-a166-4377-9da8-4f8bc30e909f
STEP: Creating a pod to test consume secrets
Mar 29 04:17:02.059: INFO: Waiting up to 5m0s for pod "pod-secrets-c29e718c-0a02-4e6a-8cbe-03fa6c38f1ec" in namespace "secrets-1298" to be "Succeeded or Failed"
Mar 29 04:17:02.088: INFO: Pod "pod-secrets-c29e718c-0a02-4e6a-8cbe-03fa6c38f1ec": Phase="Pending", Reason="", readiness=false. Elapsed: 29.410394ms
Mar 29 04:17:04.119: INFO: Pod "pod-secrets-c29e718c-0a02-4e6a-8cbe-03fa6c38f1ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060612537s
STEP: Saw pod success
Mar 29 04:17:04.119: INFO: Pod "pod-secrets-c29e718c-0a02-4e6a-8cbe-03fa6c38f1ec" satisfied condition "Succeeded or Failed"
Mar 29 04:17:04.149: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-secrets-c29e718c-0a02-4e6a-8cbe-03fa6c38f1ec container secret-volume-test: <nil>
STEP: delete the pod
Mar 29 04:17:04.239: INFO: Waiting for pod pod-secrets-c29e718c-0a02-4e6a-8cbe-03fa6c38f1ec to disappear
Mar 29 04:17:04.272: INFO: Pod pod-secrets-c29e718c-0a02-4e6a-8cbe-03fa6c38f1ec no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 29 04:17:04.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1298" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":249,"skipped":4227,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-bcd31de9-733e-4477-9856-38f8831a1c00
STEP: Creating a pod to test consume secrets
Mar 29 04:17:04.728: INFO: Waiting up to 5m0s for pod "pod-secrets-57ca1284-172b-4a60-90e7-730aa2736eed" in namespace "secrets-6367" to be "Succeeded or Failed"
Mar 29 04:17:04.757: INFO: Pod "pod-secrets-57ca1284-172b-4a60-90e7-730aa2736eed": Phase="Pending", Reason="", readiness=false. Elapsed: 29.801397ms
Mar 29 04:17:06.787: INFO: Pod "pod-secrets-57ca1284-172b-4a60-90e7-730aa2736eed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059914505s
STEP: Saw pod success
Mar 29 04:17:06.788: INFO: Pod "pod-secrets-57ca1284-172b-4a60-90e7-730aa2736eed" satisfied condition "Succeeded or Failed"
Mar 29 04:17:06.819: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-secrets-57ca1284-172b-4a60-90e7-730aa2736eed container secret-volume-test: <nil>
STEP: delete the pod
Mar 29 04:17:06.903: INFO: Waiting for pod pod-secrets-57ca1284-172b-4a60-90e7-730aa2736eed to disappear
Mar 29 04:17:06.939: INFO: Pod pod-secrets-57ca1284-172b-4a60-90e7-730aa2736eed no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 29 04:17:06.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6367" for this suite.
STEP: Destroying namespace "secret-namespace-2473" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":283,"completed":250,"skipped":4227,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 85 lines ...
Mar 29 04:18:11.918: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 29 04:18:11.918: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 29 04:18:11.918: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 29 04:18:11.918: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4505 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 29 04:18:12.249: INFO: rc: 1
Mar 29 04:18:12.249: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4505 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Mar 29 04:18:22.249: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4505 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 29 04:18:22.482: INFO: rc: 1
Mar 29 04:18:22.482: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4505 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
... skipping 270 lines ...
Mar 29 04:23:08.426: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4505 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 29 04:23:08.653: INFO: rc: 1
Mar 29 04:23:08.653: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4505 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 29 04:23:18.653: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-4505 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 29 04:23:18.869: INFO: rc: 1
Mar 29 04:23:18.869: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Mar 29 04:23:18.869: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
... skipping 13 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:592
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":283,"completed":251,"skipped":4245,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 29 04:23:36.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6748" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":283,"completed":252,"skipped":4279,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-d1a98e1e-59ae-4092-9a3a-6bb93486f30a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 29 04:23:43.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1085" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":253,"skipped":4283,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 16 lines ...
  test/e2e/framework/framework.go:175
Mar 29 04:24:14.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9470" for this suite.
STEP: Destroying namespace "nsdeletetest-7524" for this suite.
Mar 29 04:24:14.359: INFO: Namespace nsdeletetest-7524 was already deleted
STEP: Destroying namespace "nsdeletetest-1202" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":283,"completed":254,"skipped":4296,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 162 lines ...
Mar 29 04:24:16.672: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Mar 29 04:24:16.672: INFO: Waiting for all frontend pods to be Running.
Mar 29 04:24:21.723: INFO: Waiting for frontend to serve content.
Mar 29 04:24:21.765: INFO: Trying to add a new entry to the guestbook.
Mar 29 04:24:21.806: INFO: Verifying that added entry can be retrieved.
Mar 29 04:24:21.844: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
Mar 29 04:24:26.883: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig delete --grace-period=0 --force -f - --namespace=kubectl-5589'
Mar 29 04:24:27.141: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 29 04:24:27.141: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Mar 29 04:24:27.141: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.107.148.68:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig delete --grace-period=0 --force -f - --namespace=kubectl-5589'
... skipping 16 lines ...
Mar 29 04:24:28.327: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 29 04:24:28.327: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 04:24:28.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5589" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":283,"completed":255,"skipped":4301,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-98z9
STEP: Creating a pod to test atomic-volume-subpath
Mar 29 04:24:28.664: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-98z9" in namespace "subpath-3947" to be "Succeeded or Failed"
Mar 29 04:24:28.701: INFO: Pod "pod-subpath-test-configmap-98z9": Phase="Pending", Reason="", readiness=false. Elapsed: 37.255967ms
Mar 29 04:24:30.731: INFO: Pod "pod-subpath-test-configmap-98z9": Phase="Running", Reason="", readiness=true. Elapsed: 2.067564214s
Mar 29 04:24:32.761: INFO: Pod "pod-subpath-test-configmap-98z9": Phase="Running", Reason="", readiness=true. Elapsed: 4.097612305s
Mar 29 04:24:34.792: INFO: Pod "pod-subpath-test-configmap-98z9": Phase="Running", Reason="", readiness=true. Elapsed: 6.128209684s
Mar 29 04:24:36.823: INFO: Pod "pod-subpath-test-configmap-98z9": Phase="Running", Reason="", readiness=true. Elapsed: 8.158667202s
Mar 29 04:24:38.853: INFO: Pod "pod-subpath-test-configmap-98z9": Phase="Running", Reason="", readiness=true. Elapsed: 10.18919744s
Mar 29 04:24:40.883: INFO: Pod "pod-subpath-test-configmap-98z9": Phase="Running", Reason="", readiness=true. Elapsed: 12.219300359s
Mar 29 04:24:42.913: INFO: Pod "pod-subpath-test-configmap-98z9": Phase="Running", Reason="", readiness=true. Elapsed: 14.249470045s
Mar 29 04:24:44.944: INFO: Pod "pod-subpath-test-configmap-98z9": Phase="Running", Reason="", readiness=true. Elapsed: 16.280570624s
Mar 29 04:24:46.975: INFO: Pod "pod-subpath-test-configmap-98z9": Phase="Running", Reason="", readiness=true. Elapsed: 18.31098091s
Mar 29 04:24:49.005: INFO: Pod "pod-subpath-test-configmap-98z9": Phase="Running", Reason="", readiness=true. Elapsed: 20.341120646s
Mar 29 04:24:51.035: INFO: Pod "pod-subpath-test-configmap-98z9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.371008313s
STEP: Saw pod success
Mar 29 04:24:51.035: INFO: Pod "pod-subpath-test-configmap-98z9" satisfied condition "Succeeded or Failed"
Mar 29 04:24:51.065: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-subpath-test-configmap-98z9 container test-container-subpath-configmap-98z9: <nil>
STEP: delete the pod
Mar 29 04:24:51.161: INFO: Waiting for pod pod-subpath-test-configmap-98z9 to disappear
Mar 29 04:24:51.191: INFO: Pod pod-subpath-test-configmap-98z9 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-98z9
Mar 29 04:24:51.191: INFO: Deleting pod "pod-subpath-test-configmap-98z9" in namespace "subpath-3947"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 29 04:24:51.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3947" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":283,"completed":256,"skipped":4313,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 19 lines ...
Mar 29 04:24:54.743: INFO: Pod "adopt-release-fx8vb": Phase="Running", Reason="", readiness=true. Elapsed: 29.124084ms
Mar 29 04:24:54.743: INFO: Pod "adopt-release-fx8vb" satisfied condition "released"
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 29 04:24:54.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8363" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":283,"completed":257,"skipped":4313,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-7aed38fc-3b7b-46b8-8917-a41f6b0b4d6c
STEP: Creating a pod to test consume configMaps
Mar 29 04:24:55.040: INFO: Waiting up to 5m0s for pod "pod-configmaps-61ca36b5-18f8-43ea-9433-88ecb8c1acd6" in namespace "configmap-6538" to be "Succeeded or Failed"
Mar 29 04:24:55.070: INFO: Pod "pod-configmaps-61ca36b5-18f8-43ea-9433-88ecb8c1acd6": Phase="Pending", Reason="", readiness=false. Elapsed: 29.923023ms
Mar 29 04:24:57.100: INFO: Pod "pod-configmaps-61ca36b5-18f8-43ea-9433-88ecb8c1acd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059895499s
STEP: Saw pod success
Mar 29 04:24:57.100: INFO: Pod "pod-configmaps-61ca36b5-18f8-43ea-9433-88ecb8c1acd6" satisfied condition "Succeeded or Failed"
Mar 29 04:24:57.130: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-configmaps-61ca36b5-18f8-43ea-9433-88ecb8c1acd6 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 04:24:57.223: INFO: Waiting for pod pod-configmaps-61ca36b5-18f8-43ea-9433-88ecb8c1acd6 to disappear
Mar 29 04:24:57.253: INFO: Pod pod-configmaps-61ca36b5-18f8-43ea-9433-88ecb8c1acd6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 29 04:24:57.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6538" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":258,"skipped":4318,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 29 04:25:00.207: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5b2bdfad-1b55-4897-bbc3-fb8af8c92efc"
Mar 29 04:25:00.207: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5b2bdfad-1b55-4897-bbc3-fb8af8c92efc" in namespace "pods-7460" to be "terminated due to deadline exceeded"
Mar 29 04:25:00.236: INFO: Pod "pod-update-activedeadlineseconds-5b2bdfad-1b55-4897-bbc3-fb8af8c92efc": Phase="Running", Reason="", readiness=true. Elapsed: 29.130129ms
Mar 29 04:25:02.267: INFO: Pod "pod-update-activedeadlineseconds-5b2bdfad-1b55-4897-bbc3-fb8af8c92efc": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.059616124s
Mar 29 04:25:02.267: INFO: Pod "pod-update-activedeadlineseconds-5b2bdfad-1b55-4897-bbc3-fb8af8c92efc" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 29 04:25:02.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7460" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":283,"completed":259,"skipped":4352,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Mar 29 04:25:07.310: INFO: stderr: ""
Mar 29 04:25:07.310: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 29 04:25:07.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6994" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":283,"completed":260,"skipped":4372,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Mar 29 04:25:12.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5962" for this suite.
STEP: Destroying namespace "webhook-5962-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":283,"completed":261,"skipped":4381,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 28 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 29 04:25:20.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5336" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":283,"completed":262,"skipped":4382,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 29 04:25:20.830: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:135
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 29 04:25:21.193: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 29 04:25:21.193: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-e2e-gci-gce-alpha1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 29 04:25:21.193: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-e2e-gci-gce-alpha1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 29 04:25:21.225: INFO: Number of nodes with available pods: 0
Mar 29 04:25:21.225: INFO: Node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal is running more than one daemon pod
Mar 29 04:25:22.280: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 29 04:25:22.280: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-e2e-gci-gce-alpha1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 29 04:25:22.280: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-e2e-gci-gce-alpha1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 29 04:25:22.311: INFO: Number of nodes with available pods: 2
Mar 29 04:25:22.311: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar 29 04:25:22.442: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 29 04:25:22.442: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-e2e-gci-gce-alpha1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 29 04:25:22.442: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-e2e-gci-gce-alpha1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 29 04:25:22.474: INFO: Number of nodes with available pods: 1
Mar 29 04:25:22.474: INFO: Node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal is running more than one daemon pod
Mar 29 04:25:23.531: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
... skipping 3 lines ...
Mar 29 04:25:23.562: INFO: Node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal is running more than one daemon pod
Mar 29 04:25:24.530: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-e2e-gci-gce-alpha1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 29 04:25:24.530: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-e2e-gci-gce-alpha1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 29 04:25:24.530: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-e2e-gci-gce-alpha1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 29 04:25:24.562: INFO: Number of nodes with available pods: 2
Mar 29 04:25:24.562: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-257, will wait for the garbage collector to delete the pods
Mar 29 04:25:24.742: INFO: Deleting DaemonSet.extensions daemon-set took: 37.076997ms
Mar 29 04:25:25.042: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.297939ms
... skipping 4 lines ...
Mar 29 04:25:37.434: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-257/pods","resourceVersion":"27434"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 29 04:25:37.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-257" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":283,"completed":263,"skipped":4382,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-f1d78b15-6122-4c7a-b4d3-338d5e670062
STEP: Creating a pod to test consume secrets
Mar 29 04:25:37.820: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bd371206-ecba-43c9-9a39-7a333c73f935" in namespace "projected-8687" to be "Succeeded or Failed"
Mar 29 04:25:37.853: INFO: Pod "pod-projected-secrets-bd371206-ecba-43c9-9a39-7a333c73f935": Phase="Pending", Reason="", readiness=false. Elapsed: 32.415133ms
Mar 29 04:25:39.883: INFO: Pod "pod-projected-secrets-bd371206-ecba-43c9-9a39-7a333c73f935": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062949134s
STEP: Saw pod success
Mar 29 04:25:39.883: INFO: Pod "pod-projected-secrets-bd371206-ecba-43c9-9a39-7a333c73f935" satisfied condition "Succeeded or Failed"
Mar 29 04:25:39.913: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-projected-secrets-bd371206-ecba-43c9-9a39-7a333c73f935 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 29 04:25:39.992: INFO: Waiting for pod pod-projected-secrets-bd371206-ecba-43c9-9a39-7a333c73f935 to disappear
Mar 29 04:25:40.023: INFO: Pod pod-projected-secrets-bd371206-ecba-43c9-9a39-7a333c73f935 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 29 04:25:40.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8687" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":264,"skipped":4388,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 04:25:40.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a67dca44-4918-4880-9899-f2d00cc8a304" in namespace "projected-7517" to be "Succeeded or Failed"
Mar 29 04:25:40.416: INFO: Pod "downwardapi-volume-a67dca44-4918-4880-9899-f2d00cc8a304": Phase="Pending", Reason="", readiness=false. Elapsed: 29.506051ms
Mar 29 04:25:42.446: INFO: Pod "downwardapi-volume-a67dca44-4918-4880-9899-f2d00cc8a304": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059726628s
STEP: Saw pod success
Mar 29 04:25:42.446: INFO: Pod "downwardapi-volume-a67dca44-4918-4880-9899-f2d00cc8a304" satisfied condition "Succeeded or Failed"
Mar 29 04:25:42.475: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-a67dca44-4918-4880-9899-f2d00cc8a304 container client-container: <nil>
STEP: delete the pod
Mar 29 04:25:42.561: INFO: Waiting for pod downwardapi-volume-a67dca44-4918-4880-9899-f2d00cc8a304 to disappear
Mar 29 04:25:42.591: INFO: Pod downwardapi-volume-a67dca44-4918-4880-9899-f2d00cc8a304 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 29 04:25:42.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7517" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":283,"completed":265,"skipped":4398,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-88aacb37-5edd-41be-bd9c-89f6f32f855b
STEP: Creating a pod to test consume configMaps
Mar 29 04:25:42.894: INFO: Waiting up to 5m0s for pod "pod-configmaps-83f47737-a14a-42bc-8804-abea70a121ad" in namespace "configmap-9745" to be "Succeeded or Failed"
Mar 29 04:25:42.926: INFO: Pod "pod-configmaps-83f47737-a14a-42bc-8804-abea70a121ad": Phase="Pending", Reason="", readiness=false. Elapsed: 31.881181ms
Mar 29 04:25:44.956: INFO: Pod "pod-configmaps-83f47737-a14a-42bc-8804-abea70a121ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06160708s
STEP: Saw pod success
Mar 29 04:25:44.956: INFO: Pod "pod-configmaps-83f47737-a14a-42bc-8804-abea70a121ad" satisfied condition "Succeeded or Failed"
Mar 29 04:25:44.986: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-configmaps-83f47737-a14a-42bc-8804-abea70a121ad container configmap-volume-test: <nil>
STEP: delete the pod
Mar 29 04:25:45.064: INFO: Waiting for pod pod-configmaps-83f47737-a14a-42bc-8804-abea70a121ad to disappear
Mar 29 04:25:45.094: INFO: Pod pod-configmaps-83f47737-a14a-42bc-8804-abea70a121ad no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 29 04:25:45.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9745" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":283,"completed":266,"skipped":4412,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 344 lines ...
Mar 29 04:25:57.672: INFO: Deleting ReplicationController proxy-service-hw5l7 took: 37.026054ms
Mar 29 04:25:57.972: INFO: Terminating ReplicationController proxy-service-hw5l7 pods took: 300.198559ms
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 29 04:26:07.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5785" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":283,"completed":267,"skipped":4414,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 04:26:07.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-15e0bd15-bfc9-4013-af00-01361714820d" in namespace "projected-6583" to be "Succeeded or Failed"
Mar 29 04:26:07.563: INFO: Pod "downwardapi-volume-15e0bd15-bfc9-4013-af00-01361714820d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.290984ms
Mar 29 04:26:09.593: INFO: Pod "downwardapi-volume-15e0bd15-bfc9-4013-af00-01361714820d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060353727s
STEP: Saw pod success
Mar 29 04:26:09.593: INFO: Pod "downwardapi-volume-15e0bd15-bfc9-4013-af00-01361714820d" satisfied condition "Succeeded or Failed"
Mar 29 04:26:09.623: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-15e0bd15-bfc9-4013-af00-01361714820d container client-container: <nil>
STEP: delete the pod
Mar 29 04:26:09.703: INFO: Waiting for pod downwardapi-volume-15e0bd15-bfc9-4013-af00-01361714820d to disappear
Mar 29 04:26:09.733: INFO: Pod downwardapi-volume-15e0bd15-bfc9-4013-af00-01361714820d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 29 04:26:09.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6583" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":283,"completed":268,"skipped":4459,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 04:26:09.993: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27ca8d5e-40a8-4f14-806d-c9987b3b4bf7" in namespace "downward-api-4072" to be "Succeeded or Failed"
Mar 29 04:26:10.024: INFO: Pod "downwardapi-volume-27ca8d5e-40a8-4f14-806d-c9987b3b4bf7": Phase="Pending", Reason="", readiness=false. Elapsed: 31.46661ms
Mar 29 04:26:12.055: INFO: Pod "downwardapi-volume-27ca8d5e-40a8-4f14-806d-c9987b3b4bf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061814118s
STEP: Saw pod success
Mar 29 04:26:12.055: INFO: Pod "downwardapi-volume-27ca8d5e-40a8-4f14-806d-c9987b3b4bf7" satisfied condition "Succeeded or Failed"
Mar 29 04:26:12.084: INFO: Trying to get logs from node test1-md-0-625l2.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-27ca8d5e-40a8-4f14-806d-c9987b3b4bf7 container client-container: <nil>
STEP: delete the pod
Mar 29 04:26:12.166: INFO: Waiting for pod downwardapi-volume-27ca8d5e-40a8-4f14-806d-c9987b3b4bf7 to disappear
Mar 29 04:26:12.196: INFO: Pod downwardapi-volume-27ca8d5e-40a8-4f14-806d-c9987b3b4bf7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 29 04:26:12.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4072" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":269,"skipped":4482,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
Mar 29 04:26:14.647: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 29 04:26:14.896: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 04:26:14.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1843" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":283,"completed":270,"skipped":4487,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 16 lines ...
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 29 04:26:27.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3373" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":283,"completed":271,"skipped":4493,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 29 04:26:28.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6657" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":283,"completed":272,"skipped":4504,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 29 04:26:32.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2007" for this suite.
STEP: Destroying namespace "webhook-2007-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":283,"completed":273,"skipped":4508,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 29 04:26:32.994: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 29 04:26:33.166: INFO: Waiting up to 5m0s for pod "pod-f6d7f847-feba-4639-a80f-adc30af01cc0" in namespace "emptydir-3236" to be "Succeeded or Failed"
Mar 29 04:26:33.202: INFO: Pod "pod-f6d7f847-feba-4639-a80f-adc30af01cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 35.632655ms
Mar 29 04:26:35.232: INFO: Pod "pod-f6d7f847-feba-4639-a80f-adc30af01cc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065816933s
STEP: Saw pod success
Mar 29 04:26:35.232: INFO: Pod "pod-f6d7f847-feba-4639-a80f-adc30af01cc0" satisfied condition "Succeeded or Failed"
Mar 29 04:26:35.262: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod pod-f6d7f847-feba-4639-a80f-adc30af01cc0 container test-container: <nil>
STEP: delete the pod
Mar 29 04:26:35.345: INFO: Waiting for pod pod-f6d7f847-feba-4639-a80f-adc30af01cc0 to disappear
Mar 29 04:26:35.376: INFO: Pod pod-f6d7f847-feba-4639-a80f-adc30af01cc0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 29 04:26:35.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3236" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":274,"skipped":4519,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 29 04:26:58.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9373" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":283,"completed":275,"skipped":4522,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
Mar 29 04:27:21.135: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 29 04:27:21.392: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 29 04:27:21.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5493" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":276,"skipped":4585,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 29 04:27:21.654: INFO: Waiting up to 5m0s for pod "downwardapi-volume-546e0db4-a260-4cca-b833-a5d0b684a3f9" in namespace "projected-2463" to be "Succeeded or Failed"
Mar 29 04:27:21.686: INFO: Pod "downwardapi-volume-546e0db4-a260-4cca-b833-a5d0b684a3f9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.085376ms
Mar 29 04:27:23.716: INFO: Pod "downwardapi-volume-546e0db4-a260-4cca-b833-a5d0b684a3f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061800911s
STEP: Saw pod success
Mar 29 04:27:23.716: INFO: Pod "downwardapi-volume-546e0db4-a260-4cca-b833-a5d0b684a3f9" satisfied condition "Succeeded or Failed"
Mar 29 04:27:23.745: INFO: Trying to get logs from node test1-md-0-ncshz.c.k8s-e2e-gci-gce-alpha1-5.internal pod downwardapi-volume-546e0db4-a260-4cca-b833-a5d0b684a3f9 container client-container: <nil>
STEP: delete the pod
Mar 29 04:27:23.825: INFO: Waiting for pod downwardapi-volume-546e0db4-a260-4cca-b833-a5d0b684a3f9 to disappear
Mar 29 04:27:23.856: INFO: Pod downwardapi-volume-546e0db4-a260-4cca-b833-a5d0b684a3f9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 29 04:27:23.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2463" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":283,"completed":277,"skipped":4610,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Mar 29 04:27:26.868: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 29 04:27:26.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5403" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":283,"completed":278,"skipped":4628,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 25 lines ...
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7873c88e-0041-4137-b30b-53a6db997e84 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
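
The predicate being exercised: a hostPort is node-scoped, and a wildcard hostIP (the empty string, i.e. 0.0.0.0) already claims the port for every address, so a second pod asking for the same port and protocol on 127.0.0.1 cannot schedule onto the same node. A sketch of the two conflicting port declarations:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// pod4 binds the wildcard address (empty hostIP); pod5 asks for the
	// same port/protocol on 127.0.0.1. The wildcard already covers the
	// loopback address, so the scheduler reports a hostPort conflict and
	// leaves pod5 Pending on that node.
	pod4Port := corev1.ContainerPort{ContainerPort: 8080, HostPort: 54322, HostIP: "", Protocol: corev1.ProtocolTCP}
	pod5Port := corev1.ContainerPort{ContainerPort: 8080, HostPort: 54322, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}
	fmt.Println(pod4Port.HostPort == pod5Port.HostPort)
}
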
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","time":"2020-03-29T04:30:36Z"}
{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-03-29T04:30:51Z"}