Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-03-31 17:53
Elapsed: 2h0m
Revision: release-0.2
resultstore: https://source.cloud.google.com/results/invocations/0884cdfa-e8be-4347-bdaf-a92fb7bc86d1/targets/test

No Test Failures!


Error lines from build-log.txt

... skipping 125 lines ...
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Invocation ID: 8fec7c7f-5d4d-49c9-9ead-9646eb6da654
Loading: 
Loading: 0 packages loaded
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_go/releases/download/v0.22.2/rules_go-v0.22.2.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/kubernetes/repo-infra/archive/v0.0.3.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
    currently loading: vendor/github.com/onsi/ginkgo/ginkgo ... (3 packages)
Analyzing: 3 targets (3 packages loaded, 0 targets configured)
Analyzing: 3 targets (16 packages loaded, 9 targets configured)
... skipping 1698 lines ...
    ubuntu-1804:
    ubuntu-1804: TASK [sysprep : Truncate shell history] ****************************************
    ubuntu-1804: ok: [default] => (item={u'path': u'/root/.bash_history'})
    ubuntu-1804: ok: [default] => (item={u'path': u'/home/ubuntu/.bash_history'})
    ubuntu-1804:
    ubuntu-1804: PLAY RECAP *********************************************************************
    ubuntu-1804: default                    : ok=60   changed=46   unreachable=0    failed=0    skipped=72   rescued=0    ignored=0
    ubuntu-1804:
==> ubuntu-1804: Deleting instance...
    ubuntu-1804: Instance has been deleted!
==> ubuntu-1804: Creating image...
==> ubuntu-1804: Deleting disk...
    ubuntu-1804: Disk has been deleted!
... skipping 431 lines ...
node/test1-controlplane-2.c.k8s-gce-serial-1-5.internal condition met
node/test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal condition met
node/test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal condition met
Conformance test: not doing test setup.
I0331 18:25:43.862008   25441 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0331 18:25:43.863285   25441 e2e.go:124] Starting e2e run "e7deb3a1-d251-414c-9664-1837e5efceae" on Ginkgo node 1
{"msg":"Test Suite starting","total":283,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585679142 - Will randomize all specs
Will run 283 of 4994 specs

Mar 31 18:25:43.886: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
... skipping 46 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 31 18:26:35.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6035" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":283,"completed":1,"skipped":3,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 12 lines ...
Mar 31 18:26:37.834: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 31 18:26:38.085: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 18:26:38.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-96" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":283,"completed":2,"skipped":5,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 20 lines ...
Mar 31 18:26:40.341: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar 31 18:26:40.341: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe pod agnhost-master-qxhm4 --namespace=kubectl-3423'
Mar 31 18:26:40.643: INFO: stderr: ""
Mar 31 18:26:40.643: INFO: stdout: "Name:         agnhost-master-qxhm4\nNamespace:    kubectl-3423\nPriority:     0\nNode:         test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal/10.150.0.5\nStart Time:   Tue, 31 Mar 2020 18:26:38 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  cni.projectcalico.org/podIP: 192.168.234.194/32\nStatus:       Running\nIP:           192.168.234.194\nIPs:\n  IP:           192.168.234.194\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://673b40810423ab2956ff9d1721c0670d7dbe90c61eb9072be37b445ccacca5cb\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 31 Mar 2020 18:26:39 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2q59m (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-2q59m:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-2q59m\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                                                     Message\n  ----    ------     ----       ----                                                     -------\n  Normal  Scheduled  <unknown>  default-scheduler                                        Successfully assigned kubectl-3423/agnhost-master-qxhm4 to test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal\n  Normal  Pulled     1s         kubelet, test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    1s         kubelet, test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal  Created container agnhost-master\n  Normal  Started    1s         kubelet, test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal  Started container agnhost-master\n"
Mar 31 18:26:40.643: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe rc agnhost-master --namespace=kubectl-3423'
Mar 31 18:26:40.971: INFO: stderr: ""
Mar 31 18:26:40.971: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-3423\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  2s    replication-controller  Created pod: agnhost-master-qxhm4\n"
Mar 31 18:26:40.972: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe service agnhost-master --namespace=kubectl-3423'
Mar 31 18:26:41.295: INFO: stderr: ""
Mar 31 18:26:41.295: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-3423\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.105.190.195\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.234.194:6379\nSession Affinity:  None\nEvents:            <none>\n"
Mar 31 18:26:41.367: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe node test1-controlplane-0.c.k8s-gce-serial-1-5.internal'
Mar 31 18:26:41.756: INFO: stderr: ""
Mar 31 18:26:41.756: INFO: stdout: "Name:               test1-controlplane-0.c.k8s-gce-serial-1-5.internal\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-2\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=us-east4\n                    failure-domain.beta.kubernetes.io/zone=us-east4-a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=test1-controlplane-0.c.k8s-gce-serial-1-5.internal\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    projectcalico.org/IPv4Address: 10.150.0.2/32\n                    projectcalico.org/IPv4IPIPTunnelAddr: 192.168.50.64\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Tue, 31 Mar 2020 18:20:16 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  test1-controlplane-0.c.k8s-gce-serial-1-5.internal\n  AcquireTime:     <unset>\n  RenewTime:       Tue, 31 Mar 2020 18:26:34 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Tue, 31 Mar 2020 18:21:19 +0000   Tue, 31 Mar 2020 18:21:19 +0000   CalicoIsUp                   Calico is running on this node\n  MemoryPressure       False   Tue, 31 Mar 2020 18:25:57 +0000   Tue, 31 Mar 2020 18:20:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Tue, 31 Mar 2020 18:25:57 +0000   Tue, 31 Mar 2020 18:20:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Tue, 31 Mar 2020 18:25:57 +0000   Tue, 31 Mar 2020 18:20:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Tue, 31 Mar 2020 18:25:57 +0000   Tue, 31 Mar 2020 18:20:46 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   10.150.0.2\n  ExternalIP:   \n  InternalDNS:  test1-controlplane-0.c.k8s-gce-serial-1-5.internal\n  Hostname:     test1-controlplane-0.c.k8s-gce-serial-1-5.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        2\n  ephemeral-storage:          30308240Ki\n  hugepages-1Gi:              0\n  hugepages-2Mi:              0\n  memory:                     7648908Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        2\n  ephemeral-storage:          27932073938\n  hugepages-1Gi:              0\n  hugepages-2Mi:              0\n  memory:                     7546508Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 1b6dbc861bb47feaa4c32b7abe3c1d18\n  System UUID:                1b6dbc86-1bb4-7fea-a4c3-2b7abe3c1d18\n  Boot ID:                    3e329d72-b976-4e17-94e5-6c75f37d7dbd\n  Kernel Version:             5.0.0-1033-gcp\n  OS Image:                   Ubuntu 18.04.4 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3\n  Kubelet Version:            v1.16.2\n  Kube-Proxy Version:         v1.16.2\nProviderID:                   gce://k8s-gce-serial-1-5/us-east4-a/test1-controlplane-0\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 calico-kube-controllers-564b6667d7-llhbs                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s\n  kube-system                 calico-node-srrjn                                                             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s\n  kube-system                 coredns-5644d7b6d9-g8dzd                                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m45s\n  kube-system                 coredns-5644d7b6d9-xxcbm                                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m45s\n  kube-system                 etcd-test1-controlplane-0.c.k8s-gce-serial-1-5.internal                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s\n  kube-system                 kube-apiserver-test1-controlplane-0.c.k8s-gce-serial-1-5.internal             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m50s\n  kube-system                 kube-controller-manager-test1-controlplane-0.c.k8s-gce-serial-1-5.internal    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m50s\n  kube-system                 kube-proxy-8bbqh                                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s\n  kube-system                 kube-scheduler-test1-controlplane-0.c.k8s-gce-serial-1-5.internal             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m42s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests    Limits\n  --------                   --------    ------\n  cpu                        1 (50%)     0 (0%)\n  memory                     140Mi (1%)  340Mi (4%)\n  ephemeral-storage          0 (0%)      0 (0%)\n  hugepages-1Gi              0 (0%)      0 (0%)\n  hugepages-2Mi              0 (0%)      0 (0%)\n  attachable-volumes-gce-pd  0           0\nEvents:\n  Type     Reason                   Age                    From                                                            Message\n  ----     ------                   ----                   ----                                                            -------\n  Normal   Starting                 8m22s                  kubelet, test1-controlplane-0.c.k8s-gce-serial-1-5.internal     Starting kubelet.\n  Warning  InvalidDiskCapacity      8m22s                  kubelet, test1-controlplane-0.c.k8s-gce-serial-1-5.internal     invalid capacity 0 on image filesystem\n  Normal   NodeHasSufficientMemory  8m22s (x7 over 8m22s)  kubelet, test1-controlplane-0.c.k8s-gce-serial-1-5.internal     Node test1-controlplane-0.c.k8s-gce-serial-1-5.internal status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure    8m22s (x8 over 8m22s)  kubelet, test1-controlplane-0.c.k8s-gce-serial-1-5.internal     Node test1-controlplane-0.c.k8s-gce-serial-1-5.internal status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     8m22s (x7 over 8m22s)  kubelet, test1-controlplane-0.c.k8s-gce-serial-1-5.internal     Node test1-controlplane-0.c.k8s-gce-serial-1-5.internal status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced  8m22s                  kubelet, test1-controlplane-0.c.k8s-gce-serial-1-5.internal     Updated Node Allocatable limit across pods\n  Normal   Starting                 5m43s                  kube-proxy, test1-controlplane-0.c.k8s-gce-serial-1-5.internal  Starting kube-proxy.\n"
Mar 31 18:26:41.756: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig describe namespace kubectl-3423'
Mar 31 18:26:42.073: INFO: stderr: ""
Mar 31 18:26:42.073: INFO: stdout: "Name:         kubectl-3423\nLabels:       e2e-framework=kubectl\n              e2e-run=e7deb3a1-d251-414c-9664-1837e5efceae\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 18:26:42.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3423" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":283,"completed":3,"skipped":51,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 31 18:26:42.166: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Mar 31 18:26:42.341: INFO: Waiting up to 5m0s for pod "var-expansion-e1efc23a-ec18-4f0b-b2a6-92baf3d5ea56" in namespace "var-expansion-4869" to be "Succeeded or Failed"
Mar 31 18:26:42.373: INFO: Pod "var-expansion-e1efc23a-ec18-4f0b-b2a6-92baf3d5ea56": Phase="Pending", Reason="", readiness=false. Elapsed: 31.786571ms
Mar 31 18:26:44.404: INFO: Pod "var-expansion-e1efc23a-ec18-4f0b-b2a6-92baf3d5ea56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062712179s
STEP: Saw pod success
Mar 31 18:26:44.404: INFO: Pod "var-expansion-e1efc23a-ec18-4f0b-b2a6-92baf3d5ea56" satisfied condition "Succeeded or Failed"
Mar 31 18:26:44.434: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod var-expansion-e1efc23a-ec18-4f0b-b2a6-92baf3d5ea56 container dapi-container: <nil>
STEP: delete the pod
Mar 31 18:26:44.524: INFO: Waiting for pod var-expansion-e1efc23a-ec18-4f0b-b2a6-92baf3d5ea56 to disappear
Mar 31 18:26:44.554: INFO: Pod var-expansion-e1efc23a-ec18-4f0b-b2a6-92baf3d5ea56 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 31 18:26:44.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4869" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":283,"completed":4,"skipped":86,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 31 18:26:44.646: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 31 18:26:44.831: INFO: Waiting up to 5m0s for pod "pod-cff0ec39-becd-4bba-aa8f-d7e650354f07" in namespace "emptydir-8364" to be "Succeeded or Failed"
Mar 31 18:26:44.867: INFO: Pod "pod-cff0ec39-becd-4bba-aa8f-d7e650354f07": Phase="Pending", Reason="", readiness=false. Elapsed: 36.064511ms
Mar 31 18:26:46.899: INFO: Pod "pod-cff0ec39-becd-4bba-aa8f-d7e650354f07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.067367954s
STEP: Saw pod success
Mar 31 18:26:46.899: INFO: Pod "pod-cff0ec39-becd-4bba-aa8f-d7e650354f07" satisfied condition "Succeeded or Failed"
Mar 31 18:26:46.929: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-cff0ec39-becd-4bba-aa8f-d7e650354f07 container test-container: <nil>
STEP: delete the pod
Mar 31 18:26:47.007: INFO: Waiting for pod pod-cff0ec39-becd-4bba-aa8f-d7e650354f07 to disappear
Mar 31 18:26:47.037: INFO: Pod pod-cff0ec39-becd-4bba-aa8f-d7e650354f07 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 18:26:47.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8364" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":5,"skipped":116,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Mar 31 18:26:54.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6063" for this suite.
STEP: Destroying namespace "webhook-6063-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":283,"completed":6,"skipped":126,"failed":0}
SS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 19 lines ...
Mar 31 18:28:35.130: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  test/e2e/framework/framework.go:175
Mar 31 18:28:35.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-2028" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":283,"completed":7,"skipped":128,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Mar 31 18:28:41.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8995" for this suite.
STEP: Destroying namespace "webhook-8995-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":283,"completed":8,"skipped":137,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Mar 31 18:28:46.167: INFO: Successfully updated pod "labelsupdate11540ea9-3cbd-4e56-b57d-449305c03292"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 31 18:28:48.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8321" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":283,"completed":9,"skipped":142,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 20 lines ...
Mar 31 18:28:56.917: INFO: Pod "test-cleanup-deployment-577c77b589-9554s" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-9554s test-cleanup-deployment-577c77b589- deployment-9045 /api/v1/namespaces/deployment-9045/pods/test-cleanup-deployment-577c77b589-9554s e2485d7f-a60b-41e2-8f3c-b66ef8e13e98 2213 0 2020-03-31 18:28:54 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:577c77b589] map[cni.projectcalico.org/podIP:192.168.234.199/32] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 b439c63d-e332-45ec-bf13-0a6d3cfb27ae 0xc001440767 0xc001440768}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-24wvl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-24wvl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-24wvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 18:28:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 18:28:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 18:28:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 18:28:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.5,PodIP:192.168.234.199,StartTime:2020-03-31 18:28:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-31 18:28:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://e02c6cf4e48170daccb8fe9d90c6de62b2c07a3bd0f0ddcf2b7a16759547b82a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.234.199,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 31 18:28:56.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9045" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":283,"completed":10,"skipped":154,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 31 18:28:57.009: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 31 18:28:57.180: INFO: Waiting up to 5m0s for pod "downward-api-e166b38a-3074-45b4-a2d8-5aa59b5180f7" in namespace "downward-api-1010" to be "Succeeded or Failed"
Mar 31 18:28:57.213: INFO: Pod "downward-api-e166b38a-3074-45b4-a2d8-5aa59b5180f7": Phase="Pending", Reason="", readiness=false. Elapsed: 33.02901ms
Mar 31 18:28:59.243: INFO: Pod "downward-api-e166b38a-3074-45b4-a2d8-5aa59b5180f7": Phase="Running", Reason="", readiness=true. Elapsed: 2.063455983s
Mar 31 18:29:01.274: INFO: Pod "downward-api-e166b38a-3074-45b4-a2d8-5aa59b5180f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093779353s
STEP: Saw pod success
Mar 31 18:29:01.274: INFO: Pod "downward-api-e166b38a-3074-45b4-a2d8-5aa59b5180f7" satisfied condition "Succeeded or Failed"
Mar 31 18:29:01.304: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downward-api-e166b38a-3074-45b4-a2d8-5aa59b5180f7 container dapi-container: <nil>
STEP: delete the pod
Mar 31 18:29:01.396: INFO: Waiting for pod downward-api-e166b38a-3074-45b4-a2d8-5aa59b5180f7 to disappear
Mar 31 18:29:01.427: INFO: Pod downward-api-e166b38a-3074-45b4-a2d8-5aa59b5180f7 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 31 18:29:01.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1010" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":283,"completed":11,"skipped":168,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 31 18:29:12.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1519" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":283,"completed":12,"skipped":170,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 18:29:13.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b69ac008-33eb-4fcc-b798-284085ecae0e" in namespace "projected-2925" to be "Succeeded or Failed"
Mar 31 18:29:13.239: INFO: Pod "downwardapi-volume-b69ac008-33eb-4fcc-b798-284085ecae0e": Phase="Pending", Reason="", readiness=false. Elapsed: 33.241659ms
Mar 31 18:29:15.269: INFO: Pod "downwardapi-volume-b69ac008-33eb-4fcc-b798-284085ecae0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062949936s
STEP: Saw pod success
Mar 31 18:29:15.269: INFO: Pod "downwardapi-volume-b69ac008-33eb-4fcc-b798-284085ecae0e" satisfied condition "Succeeded or Failed"
Mar 31 18:29:15.299: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-b69ac008-33eb-4fcc-b798-284085ecae0e container client-container: <nil>
STEP: delete the pod
Mar 31 18:29:15.377: INFO: Waiting for pod downwardapi-volume-b69ac008-33eb-4fcc-b798-284085ecae0e to disappear
Mar 31 18:29:15.407: INFO: Pod downwardapi-volume-b69ac008-33eb-4fcc-b798-284085ecae0e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 31 18:29:15.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2925" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":283,"completed":13,"skipped":212,"failed":0}
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 339 lines ...
Mar 31 18:29:22.986: INFO: Deleting ReplicationController proxy-service-m78f2 took: 36.053218ms
Mar 31 18:29:23.287: INFO: Terminating ReplicationController proxy-service-m78f2 pods took: 300.319847ms
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 31 18:29:35.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2358" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":283,"completed":14,"skipped":215,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:175
Mar 31 18:29:39.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7259" for this suite.
STEP: Destroying namespace "webhook-7259-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":283,"completed":15,"skipped":232,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-704b9f24-08fd-4abd-bbbc-15f96ddf4510
STEP: Creating a pod to test consume secrets
Mar 31 18:29:39.606: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b9f9f495-00cf-4830-9144-d44e1642083e" in namespace "projected-2492" to be "Succeeded or Failed"
Mar 31 18:29:39.638: INFO: Pod "pod-projected-secrets-b9f9f495-00cf-4830-9144-d44e1642083e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.993896ms
Mar 31 18:29:41.669: INFO: Pod "pod-projected-secrets-b9f9f495-00cf-4830-9144-d44e1642083e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062202821s
STEP: Saw pod success
Mar 31 18:29:41.669: INFO: Pod "pod-projected-secrets-b9f9f495-00cf-4830-9144-d44e1642083e" satisfied condition "Succeeded or Failed"
Mar 31 18:29:41.699: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-projected-secrets-b9f9f495-00cf-4830-9144-d44e1642083e container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 31 18:29:41.785: INFO: Waiting for pod pod-projected-secrets-b9f9f495-00cf-4830-9144-d44e1642083e to disappear
Mar 31 18:29:41.815: INFO: Pod pod-projected-secrets-b9f9f495-00cf-4830-9144-d44e1642083e no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 31 18:29:41.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2492" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":16,"skipped":239,"failed":0}
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 31 18:29:41.908: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Mar 31 18:29:42.077: INFO: Waiting up to 5m0s for pod "client-containers-47e1e139-030e-44bd-9328-d313fbe75c6c" in namespace "containers-2246" to be "Succeeded or Failed"
Mar 31 18:29:42.109: INFO: Pod "client-containers-47e1e139-030e-44bd-9328-d313fbe75c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.437062ms
Mar 31 18:29:44.139: INFO: Pod "client-containers-47e1e139-030e-44bd-9328-d313fbe75c6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061597687s
STEP: Saw pod success
Mar 31 18:29:44.139: INFO: Pod "client-containers-47e1e139-030e-44bd-9328-d313fbe75c6c" satisfied condition "Succeeded or Failed"
Mar 31 18:29:44.169: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod client-containers-47e1e139-030e-44bd-9328-d313fbe75c6c container test-container: <nil>
STEP: delete the pod
Mar 31 18:29:44.249: INFO: Waiting for pod client-containers-47e1e139-030e-44bd-9328-d313fbe75c6c to disappear
Mar 31 18:29:44.280: INFO: Pod client-containers-47e1e139-030e-44bd-9328-d313fbe75c6c no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 31 18:29:44.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2246" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":283,"completed":17,"skipped":242,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Mar 31 18:29:45.456: INFO: created pod pod-service-account-nomountsa-nomountspec
Mar 31 18:29:45.456: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Mar 31 18:29:45.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3447" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":283,"completed":18,"skipped":282,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 31 18:29:47.813: INFO: Initial restart count of pod test-webserver-a9f045f7-8d89-40d2-b556-48b974c60400 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 31 18:33:49.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5190" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":283,"completed":19,"skipped":284,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 31 18:33:50.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0331 18:33:50.641576   25441 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-2507" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":283,"completed":20,"skipped":294,"failed":0}
S
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 26 lines ...
Mar 31 18:33:55.325: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-4s6fv" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-4s6fv test-rolling-update-deployment-664dd8fc7f- deployment-339 /api/v1/namespaces/deployment-339/pods/test-rolling-update-deployment-664dd8fc7f-4s6fv a951d4c9-b89d-445b-956b-2fe8c9216e49 3338 0 2020-03-31 18:33:53 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:664dd8fc7f] map[cni.projectcalico.org/podIP:192.168.187.144/32] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f ab12e8e9-430e-4821-a3cc-d3d81a508596 0xc001d66c97 0xc001d66c98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vt5cb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vt5cb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vt5cb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 18:33:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 18:33:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 18:33:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 18:33:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.187.144,StartTime:2020-03-31 18:33:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-31 18:33:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://91d919c547cbbeedc55020544d54f27963117bd894e5e7ae40644e4e08f390c0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.187.144,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 31 18:33:55.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-339" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":283,"completed":21,"skipped":295,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 31 18:33:55.418: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 31 18:33:55.581: INFO: Waiting up to 5m0s for pod "pod-c6c03329-1865-4582-a13b-814e29fea10d" in namespace "emptydir-2860" to be "Succeeded or Failed"
Mar 31 18:33:55.612: INFO: Pod "pod-c6c03329-1865-4582-a13b-814e29fea10d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.703296ms
Mar 31 18:33:57.642: INFO: Pod "pod-c6c03329-1865-4582-a13b-814e29fea10d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061076149s
STEP: Saw pod success
Mar 31 18:33:57.643: INFO: Pod "pod-c6c03329-1865-4582-a13b-814e29fea10d" satisfied condition "Succeeded or Failed"
Mar 31 18:33:57.672: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-c6c03329-1865-4582-a13b-814e29fea10d container test-container: <nil>
STEP: delete the pod
Mar 31 18:33:57.761: INFO: Waiting for pod pod-c6c03329-1865-4582-a13b-814e29fea10d to disappear
Mar 31 18:33:57.792: INFO: Pod pod-c6c03329-1865-4582-a13b-814e29fea10d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 18:33:57.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2860" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":22,"skipped":310,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 51 lines ...
Mar 31 18:34:15.041: INFO: stderr: ""
Mar 31 18:34:15.041: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 18:34:15.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8269" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":283,"completed":23,"skipped":323,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
... skipping 416 lines ...
Mar 31 18:34:25.094: INFO: 99 %ile: 881.416969ms
Mar 31 18:34:25.094: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  test/e2e/framework/framework.go:175
Mar 31 18:34:25.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9223" for this suite.
•{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":283,"completed":24,"skipped":356,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 18:34:30.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5676" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":283,"completed":25,"skipped":373,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 22 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 31 18:34:40.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7935" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":283,"completed":26,"skipped":424,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 31 18:34:40.735: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Mar 31 18:36:41.006: INFO: Deleting pod "var-expansion-fa610638-cd1b-44c6-9164-6131b28534a1" in namespace "var-expansion-6537"
Mar 31 18:36:41.043: INFO: Wait up to 5m0s for pod "var-expansion-fa610638-cd1b-44c6-9164-6131b28534a1" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 31 18:36:47.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6537" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":283,"completed":27,"skipped":431,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 31 18:36:56.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-677" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":283,"completed":28,"skipped":452,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 17 lines ...
Mar 31 18:37:04.759: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 31 18:37:04.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9864" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":283,"completed":29,"skipped":454,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 18:37:05.057: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04944698-86ba-49f2-b725-0e5da48c7c6c" in namespace "projected-1067" to be "Succeeded or Failed"
Mar 31 18:37:05.090: INFO: Pod "downwardapi-volume-04944698-86ba-49f2-b725-0e5da48c7c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.018142ms
Mar 31 18:37:07.121: INFO: Pod "downwardapi-volume-04944698-86ba-49f2-b725-0e5da48c7c6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063611633s
STEP: Saw pod success
Mar 31 18:37:07.121: INFO: Pod "downwardapi-volume-04944698-86ba-49f2-b725-0e5da48c7c6c" satisfied condition "Succeeded or Failed"
Mar 31 18:37:07.151: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-04944698-86ba-49f2-b725-0e5da48c7c6c container client-container: <nil>
STEP: delete the pod
Mar 31 18:37:07.229: INFO: Waiting for pod downwardapi-volume-04944698-86ba-49f2-b725-0e5da48c7c6c to disappear
Mar 31 18:37:07.258: INFO: Pod downwardapi-volume-04944698-86ba-49f2-b725-0e5da48c7c6c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 31 18:37:07.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1067" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":30,"skipped":459,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Mar 31 18:37:08.716: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 31 18:37:08.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2831" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":283,"completed":31,"skipped":470,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 23 lines ...
Mar 31 18:37:25.342: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 31 18:37:25.374: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 31 18:37:25.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2793" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":283,"completed":32,"skipped":495,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 31 18:37:36.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5244" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":283,"completed":33,"skipped":580,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 8 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Mar 31 18:37:38.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5420" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":283,"completed":34,"skipped":584,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 8 lines ...
Mar 31 18:37:39.355: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b715403b-2121-46d1-8b17-481f306f479c", Controller:(*bool)(0xc0024a1c86), BlockOwnerDeletion:(*bool)(0xc0024a1c87)}}
Mar 31 18:37:39.390: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"e8ac51c4-7677-4741-b91f-4049d12d6924", Controller:(*bool)(0xc002757c46), BlockOwnerDeletion:(*bool)(0xc002757c47)}}
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 31 18:37:44.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5646" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":283,"completed":35,"skipped":586,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 28 lines ...
Mar 31 18:39:01.463: INFO: Terminating ReplicationController wrapped-volume-race-729c352a-c4c6-439a-8451-7049a3e105af pods took: 400.486262ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:175
Mar 31 18:39:17.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2290" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":283,"completed":36,"skipped":602,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 18:39:18.089: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0ef670b-2ac7-4659-b461-2e03a518fbd1" in namespace "projected-1275" to be "Succeeded or Failed"
Mar 31 18:39:18.120: INFO: Pod "downwardapi-volume-c0ef670b-2ac7-4659-b461-2e03a518fbd1": Phase="Pending", Reason="", readiness=false. Elapsed: 30.660816ms
Mar 31 18:39:20.150: INFO: Pod "downwardapi-volume-c0ef670b-2ac7-4659-b461-2e03a518fbd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061203083s
STEP: Saw pod success
Mar 31 18:39:20.150: INFO: Pod "downwardapi-volume-c0ef670b-2ac7-4659-b461-2e03a518fbd1" satisfied condition "Succeeded or Failed"
Mar 31 18:39:20.181: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-c0ef670b-2ac7-4659-b461-2e03a518fbd1 container client-container: <nil>
STEP: delete the pod
Mar 31 18:39:20.280: INFO: Waiting for pod downwardapi-volume-c0ef670b-2ac7-4659-b461-2e03a518fbd1 to disappear
Mar 31 18:39:20.310: INFO: Pod downwardapi-volume-c0ef670b-2ac7-4659-b461-2e03a518fbd1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 31 18:39:20.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1275" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":283,"completed":37,"skipped":612,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-7119f44d-6bf0-4bb2-ab56-109798e0d992
STEP: Creating a pod to test consume secrets
Mar 31 18:39:20.600: INFO: Waiting up to 5m0s for pod "pod-secrets-f2fcda35-4a17-43d9-8151-b78ce427a102" in namespace "secrets-7928" to be "Succeeded or Failed"
Mar 31 18:39:20.630: INFO: Pod "pod-secrets-f2fcda35-4a17-43d9-8151-b78ce427a102": Phase="Pending", Reason="", readiness=false. Elapsed: 30.241895ms
Mar 31 18:39:22.660: INFO: Pod "pod-secrets-f2fcda35-4a17-43d9-8151-b78ce427a102": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060702381s
STEP: Saw pod success
Mar 31 18:39:22.660: INFO: Pod "pod-secrets-f2fcda35-4a17-43d9-8151-b78ce427a102" satisfied condition "Succeeded or Failed"
Mar 31 18:39:22.691: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-secrets-f2fcda35-4a17-43d9-8151-b78ce427a102 container secret-volume-test: <nil>
STEP: delete the pod
Mar 31 18:39:22.767: INFO: Waiting for pod pod-secrets-f2fcda35-4a17-43d9-8151-b78ce427a102 to disappear
Mar 31 18:39:22.797: INFO: Pod pod-secrets-f2fcda35-4a17-43d9-8151-b78ce427a102 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 31 18:39:22.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7928" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":38,"skipped":616,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Mar 31 18:39:25.799: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 31 18:39:25.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8386" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":283,"completed":39,"skipped":633,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 31 18:39:36.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9204" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":283,"completed":40,"skipped":639,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 31 18:39:36.632: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
Mar 31 18:39:36.806: INFO: Waiting up to 5m0s for pod "client-containers-3de1d1fa-e739-4b17-a54c-dfb3a7eb04a9" in namespace "containers-4092" to be "Succeeded or Failed"
Mar 31 18:39:36.839: INFO: Pod "client-containers-3de1d1fa-e739-4b17-a54c-dfb3a7eb04a9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.865325ms
Mar 31 18:39:38.869: INFO: Pod "client-containers-3de1d1fa-e739-4b17-a54c-dfb3a7eb04a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06345796s
STEP: Saw pod success
Mar 31 18:39:38.869: INFO: Pod "client-containers-3de1d1fa-e739-4b17-a54c-dfb3a7eb04a9" satisfied condition "Succeeded or Failed"
Mar 31 18:39:38.899: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod client-containers-3de1d1fa-e739-4b17-a54c-dfb3a7eb04a9 container test-container: <nil>
STEP: delete the pod
Mar 31 18:39:38.993: INFO: Waiting for pod client-containers-3de1d1fa-e739-4b17-a54c-dfb3a7eb04a9 to disappear
Mar 31 18:39:39.024: INFO: Pod client-containers-3de1d1fa-e739-4b17-a54c-dfb3a7eb04a9 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 31 18:39:39.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4092" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":283,"completed":41,"skipped":650,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 31 18:39:41.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4305" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":283,"completed":42,"skipped":665,"failed":0}
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Mar 31 18:39:44.213: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:44.246: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:44.484: INFO: Unable to read jessie_udp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:44.514: INFO: Unable to read jessie_tcp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:44.544: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:44.575: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:44.774: INFO: Lookups using dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91 failed for: [wheezy_udp@dns-test-service.dns-2346.svc.cluster.local wheezy_tcp@dns-test-service.dns-2346.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local jessie_udp@dns-test-service.dns-2346.svc.cluster.local jessie_tcp@dns-test-service.dns-2346.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local]

Mar 31 18:39:49.806: INFO: Unable to read wheezy_udp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:49.837: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:49.868: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:49.900: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:50.127: INFO: Unable to read jessie_udp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:50.159: INFO: Unable to read jessie_tcp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:50.191: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:50.230: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:50.424: INFO: Lookups using dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91 failed for: [wheezy_udp@dns-test-service.dns-2346.svc.cluster.local wheezy_tcp@dns-test-service.dns-2346.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local jessie_udp@dns-test-service.dns-2346.svc.cluster.local jessie_tcp@dns-test-service.dns-2346.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local]

Mar 31 18:39:54.806: INFO: Unable to read wheezy_udp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:54.838: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:54.869: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:54.902: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:55.125: INFO: Unable to read jessie_udp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:55.157: INFO: Unable to read jessie_tcp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:55.187: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:55.218: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:55.415: INFO: Lookups using dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91 failed for: [wheezy_udp@dns-test-service.dns-2346.svc.cluster.local wheezy_tcp@dns-test-service.dns-2346.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local jessie_udp@dns-test-service.dns-2346.svc.cluster.local jessie_tcp@dns-test-service.dns-2346.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local]

Mar 31 18:39:59.807: INFO: Unable to read wheezy_udp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:59.839: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:59.872: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:39:59.904: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:00.129: INFO: Unable to read jessie_udp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:00.160: INFO: Unable to read jessie_tcp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:00.191: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:00.229: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:00.421: INFO: Lookups using dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91 failed for: [wheezy_udp@dns-test-service.dns-2346.svc.cluster.local wheezy_tcp@dns-test-service.dns-2346.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local jessie_udp@dns-test-service.dns-2346.svc.cluster.local jessie_tcp@dns-test-service.dns-2346.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local]

Mar 31 18:40:04.806: INFO: Unable to read wheezy_udp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:04.837: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:04.867: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:04.898: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:05.126: INFO: Unable to read jessie_udp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:05.159: INFO: Unable to read jessie_tcp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:05.190: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:05.222: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:05.414: INFO: Lookups using dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91 failed for: [wheezy_udp@dns-test-service.dns-2346.svc.cluster.local wheezy_tcp@dns-test-service.dns-2346.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local jessie_udp@dns-test-service.dns-2346.svc.cluster.local jessie_tcp@dns-test-service.dns-2346.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local]

Mar 31 18:40:09.805: INFO: Unable to read wheezy_udp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:09.838: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:09.871: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:09.902: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:10.133: INFO: Unable to read jessie_udp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:10.166: INFO: Unable to read jessie_tcp@dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:10.198: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:10.229: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local from pod dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91: the server could not find the requested resource (get pods dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91)
Mar 31 18:40:10.423: INFO: Lookups using dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91 failed for: [wheezy_udp@dns-test-service.dns-2346.svc.cluster.local wheezy_tcp@dns-test-service.dns-2346.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local jessie_udp@dns-test-service.dns-2346.svc.cluster.local jessie_tcp@dns-test-service.dns-2346.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2346.svc.cluster.local]

Mar 31 18:40:15.420: INFO: DNS probes using dns-2346/dns-test-79e0eefb-f529-4e61-95e8-aa74791b5c91 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 31 18:40:15.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2346" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":283,"completed":43,"skipped":669,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
Mar 31 18:40:23.265: INFO: stderr: ""
Mar 31 18:40:23.265: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 18:40:23.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2128" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":283,"completed":44,"skipped":747,"failed":0}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-dxnh
STEP: Creating a pod to test atomic-volume-subpath
Mar 31 18:40:23.590: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dxnh" in namespace "subpath-3928" to be "Succeeded or Failed"
Mar 31 18:40:23.619: INFO: Pod "pod-subpath-test-secret-dxnh": Phase="Pending", Reason="", readiness=false. Elapsed: 29.01908ms
Mar 31 18:40:25.649: INFO: Pod "pod-subpath-test-secret-dxnh": Phase="Running", Reason="", readiness=true. Elapsed: 2.05962622s
Mar 31 18:40:27.681: INFO: Pod "pod-subpath-test-secret-dxnh": Phase="Running", Reason="", readiness=true. Elapsed: 4.090919454s
Mar 31 18:40:29.714: INFO: Pod "pod-subpath-test-secret-dxnh": Phase="Running", Reason="", readiness=true. Elapsed: 6.123702548s
Mar 31 18:40:31.744: INFO: Pod "pod-subpath-test-secret-dxnh": Phase="Running", Reason="", readiness=true. Elapsed: 8.154373693s
Mar 31 18:40:33.776: INFO: Pod "pod-subpath-test-secret-dxnh": Phase="Running", Reason="", readiness=true. Elapsed: 10.186079179s
Mar 31 18:40:35.806: INFO: Pod "pod-subpath-test-secret-dxnh": Phase="Running", Reason="", readiness=true. Elapsed: 12.216464983s
Mar 31 18:40:37.837: INFO: Pod "pod-subpath-test-secret-dxnh": Phase="Running", Reason="", readiness=true. Elapsed: 14.247504907s
Mar 31 18:40:39.868: INFO: Pod "pod-subpath-test-secret-dxnh": Phase="Running", Reason="", readiness=true. Elapsed: 16.27783828s
Mar 31 18:40:41.898: INFO: Pod "pod-subpath-test-secret-dxnh": Phase="Running", Reason="", readiness=true. Elapsed: 18.307761619s
Mar 31 18:40:43.929: INFO: Pod "pod-subpath-test-secret-dxnh": Phase="Running", Reason="", readiness=true. Elapsed: 20.339077707s
Mar 31 18:40:45.959: INFO: Pod "pod-subpath-test-secret-dxnh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.369418167s
STEP: Saw pod success
Mar 31 18:40:45.959: INFO: Pod "pod-subpath-test-secret-dxnh" satisfied condition "Succeeded or Failed"
Mar 31 18:40:45.989: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-subpath-test-secret-dxnh container test-container-subpath-secret-dxnh: <nil>
STEP: delete the pod
Mar 31 18:40:46.067: INFO: Waiting for pod pod-subpath-test-secret-dxnh to disappear
Mar 31 18:40:46.098: INFO: Pod pod-subpath-test-secret-dxnh no longer exists
STEP: Deleting pod pod-subpath-test-secret-dxnh
Mar 31 18:40:46.098: INFO: Deleting pod "pod-subpath-test-secret-dxnh" in namespace "subpath-3928"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 31 18:40:46.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3928" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":283,"completed":45,"skipped":751,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-a0969e88-36da-471c-9cf9-ccfefe7db134
STEP: Creating a pod to test consume configMaps
Mar 31 18:40:46.429: INFO: Waiting up to 5m0s for pod "pod-configmaps-fce5bb1f-5f68-4d93-a116-1962d02299bf" in namespace "configmap-1241" to be "Succeeded or Failed"
Mar 31 18:40:46.461: INFO: Pod "pod-configmaps-fce5bb1f-5f68-4d93-a116-1962d02299bf": Phase="Pending", Reason="", readiness=false. Elapsed: 31.76202ms
Mar 31 18:40:48.491: INFO: Pod "pod-configmaps-fce5bb1f-5f68-4d93-a116-1962d02299bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061502217s
STEP: Saw pod success
Mar 31 18:40:48.491: INFO: Pod "pod-configmaps-fce5bb1f-5f68-4d93-a116-1962d02299bf" satisfied condition "Succeeded or Failed"
Mar 31 18:40:48.520: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-configmaps-fce5bb1f-5f68-4d93-a116-1962d02299bf container configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 18:40:48.597: INFO: Waiting for pod pod-configmaps-fce5bb1f-5f68-4d93-a116-1962d02299bf to disappear
Mar 31 18:40:48.627: INFO: Pod pod-configmaps-fce5bb1f-5f68-4d93-a116-1962d02299bf no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 31 18:40:48.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1241" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":283,"completed":46,"skipped":761,"failed":0}

------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 31 18:40:55.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0331 18:40:55.100456   25441 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-4520" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":283,"completed":47,"skipped":761,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 18:40:55.336: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0a58e84-70bf-4995-b6c1-34fc3b4f231b" in namespace "downward-api-7149" to be "Succeeded or Failed"
Mar 31 18:40:55.367: INFO: Pod "downwardapi-volume-d0a58e84-70bf-4995-b6c1-34fc3b4f231b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.891914ms
Mar 31 18:40:57.398: INFO: Pod "downwardapi-volume-d0a58e84-70bf-4995-b6c1-34fc3b4f231b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06214214s
Mar 31 18:40:59.429: INFO: Pod "downwardapi-volume-d0a58e84-70bf-4995-b6c1-34fc3b4f231b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092767973s
STEP: Saw pod success
Mar 31 18:40:59.429: INFO: Pod "downwardapi-volume-d0a58e84-70bf-4995-b6c1-34fc3b4f231b" satisfied condition "Succeeded or Failed"
Mar 31 18:40:59.459: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-d0a58e84-70bf-4995-b6c1-34fc3b4f231b container client-container: <nil>
STEP: delete the pod
Mar 31 18:40:59.559: INFO: Waiting for pod downwardapi-volume-d0a58e84-70bf-4995-b6c1-34fc3b4f231b to disappear
Mar 31 18:40:59.592: INFO: Pod downwardapi-volume-d0a58e84-70bf-4995-b6c1-34fc3b4f231b no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 31 18:40:59.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7149" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":48,"skipped":779,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 31 18:40:59.683: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 31 18:40:59.847: INFO: Waiting up to 5m0s for pod "pod-ee135974-12f4-4232-8978-4426f6219648" in namespace "emptydir-9753" to be "Succeeded or Failed"
Mar 31 18:40:59.878: INFO: Pod "pod-ee135974-12f4-4232-8978-4426f6219648": Phase="Pending", Reason="", readiness=false. Elapsed: 31.175958ms
Mar 31 18:41:01.908: INFO: Pod "pod-ee135974-12f4-4232-8978-4426f6219648": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061804662s
STEP: Saw pod success
Mar 31 18:41:01.908: INFO: Pod "pod-ee135974-12f4-4232-8978-4426f6219648" satisfied condition "Succeeded or Failed"
Mar 31 18:41:01.938: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-ee135974-12f4-4232-8978-4426f6219648 container test-container: <nil>
STEP: delete the pod
Mar 31 18:41:02.015: INFO: Waiting for pod pod-ee135974-12f4-4232-8978-4426f6219648 to disappear
Mar 31 18:41:02.046: INFO: Pod pod-ee135974-12f4-4232-8978-4426f6219648 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 18:41:02.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9753" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":49,"skipped":785,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 31 18:41:02.136: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 31 18:41:02.308: INFO: Waiting up to 5m0s for pod "pod-d1aeb896-216f-46d0-b20e-1b6e63f9d1d1" in namespace "emptydir-8782" to be "Succeeded or Failed"
Mar 31 18:41:02.339: INFO: Pod "pod-d1aeb896-216f-46d0-b20e-1b6e63f9d1d1": Phase="Pending", Reason="", readiness=false. Elapsed: 30.886682ms
Mar 31 18:41:04.369: INFO: Pod "pod-d1aeb896-216f-46d0-b20e-1b6e63f9d1d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061303789s
STEP: Saw pod success
Mar 31 18:41:04.369: INFO: Pod "pod-d1aeb896-216f-46d0-b20e-1b6e63f9d1d1" satisfied condition "Succeeded or Failed"
Mar 31 18:41:04.401: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-d1aeb896-216f-46d0-b20e-1b6e63f9d1d1 container test-container: <nil>
STEP: delete the pod
Mar 31 18:41:04.477: INFO: Waiting for pod pod-d1aeb896-216f-46d0-b20e-1b6e63f9d1d1 to disappear
Mar 31 18:41:04.507: INFO: Pod pod-d1aeb896-216f-46d0-b20e-1b6e63f9d1d1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 18:41:04.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8782" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":50,"skipped":804,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 31 18:41:04.603: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename webhook
... skipping 5 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 31 18:41:05.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721276865, loc:(*time.Location)(0x7b90f40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721276865, loc:(*time.Location)(0x7b90f40)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721276865, loc:(*time.Location)(0x7b90f40)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721276865, loc:(*time.Location)(0x7b90f40)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 31 18:41:08.678: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 18:41:08.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7458" for this suite.
STEP: Destroying namespace "webhook-7458-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":283,"completed":51,"skipped":809,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
Mar 31 18:41:23.782: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 31 18:41:26.589: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 18:41:40.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5065" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":283,"completed":52,"skipped":821,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 38 lines ...
• [SLOW TEST:304.991 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":283,"completed":53,"skipped":822,"failed":0}
SSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
... skipping 25 lines ...
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:175
Mar 31 18:46:54.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6806" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":283,"completed":54,"skipped":827,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 31 18:46:54.879: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 18:46:55.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4991" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":283,"completed":55,"skipped":857,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-79fe8ded-e492-4a5f-b8d9-d170425cdfe0
STEP: Creating a pod to test consume secrets
Mar 31 18:46:55.404: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1d1a4f62-64b2-4cb1-a36a-0b4c1ce648ea" in namespace "projected-7570" to be "Succeeded or Failed"
Mar 31 18:46:55.437: INFO: Pod "pod-projected-secrets-1d1a4f62-64b2-4cb1-a36a-0b4c1ce648ea": Phase="Pending", Reason="", readiness=false. Elapsed: 32.712907ms
Mar 31 18:46:57.467: INFO: Pod "pod-projected-secrets-1d1a4f62-64b2-4cb1-a36a-0b4c1ce648ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.06288104s
STEP: Saw pod success
Mar 31 18:46:57.467: INFO: Pod "pod-projected-secrets-1d1a4f62-64b2-4cb1-a36a-0b4c1ce648ea" satisfied condition "Succeeded or Failed"
Mar 31 18:46:57.497: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-projected-secrets-1d1a4f62-64b2-4cb1-a36a-0b4c1ce648ea container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 31 18:46:57.589: INFO: Waiting for pod pod-projected-secrets-1d1a4f62-64b2-4cb1-a36a-0b4c1ce648ea to disappear
Mar 31 18:46:57.621: INFO: Pod pod-projected-secrets-1d1a4f62-64b2-4cb1-a36a-0b4c1ce648ea no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 31 18:46:57.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7570" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":56,"skipped":867,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 31 18:47:00.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3654" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":57,"skipped":871,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
... skipping 31 lines ...
Mar 31 18:47:07.751: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  test/e2e/framework/framework.go:175
Mar 31 18:47:07.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-3599" for this suite.
•{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":283,"completed":58,"skipped":892,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 27 lines ...
Mar 31 18:47:26.381: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 31 18:47:26.412: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 31 18:47:26.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7239" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":283,"completed":59,"skipped":902,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-1240/configmap-test-04a5e982-dc63-40cc-ac87-a7c0dba22b3d
STEP: Creating a pod to test consume configMaps
Mar 31 18:47:26.709: INFO: Waiting up to 5m0s for pod "pod-configmaps-e6e5aaa5-9044-4a9a-8b0f-9e1b6136c4a5" in namespace "configmap-1240" to be "Succeeded or Failed"
Mar 31 18:47:26.742: INFO: Pod "pod-configmaps-e6e5aaa5-9044-4a9a-8b0f-9e1b6136c4a5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.30209ms
Mar 31 18:47:28.772: INFO: Pod "pod-configmaps-e6e5aaa5-9044-4a9a-8b0f-9e1b6136c4a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062934358s
STEP: Saw pod success
Mar 31 18:47:28.772: INFO: Pod "pod-configmaps-e6e5aaa5-9044-4a9a-8b0f-9e1b6136c4a5" satisfied condition "Succeeded or Failed"
Mar 31 18:47:28.803: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-configmaps-e6e5aaa5-9044-4a9a-8b0f-9e1b6136c4a5 container env-test: <nil>
STEP: delete the pod
Mar 31 18:47:28.881: INFO: Waiting for pod pod-configmaps-e6e5aaa5-9044-4a9a-8b0f-9e1b6136c4a5 to disappear
Mar 31 18:47:28.911: INFO: Pod pod-configmaps-e6e5aaa5-9044-4a9a-8b0f-9e1b6136c4a5 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 31 18:47:28.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1240" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":283,"completed":60,"skipped":916,"failed":0}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Mar 31 18:47:29.135: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 31 18:47:32.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7076" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":283,"completed":61,"skipped":918,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 35 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 31 18:47:42.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0331 18:47:42.921755   25441 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-4169" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":283,"completed":62,"skipped":944,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 31 18:47:42.991: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Mar 31 18:47:43.163: INFO: Waiting up to 5m0s for pod "var-expansion-8d3a8912-d954-46d2-8d46-ae01050fa285" in namespace "var-expansion-3929" to be "Succeeded or Failed"
Mar 31 18:47:43.196: INFO: Pod "var-expansion-8d3a8912-d954-46d2-8d46-ae01050fa285": Phase="Pending", Reason="", readiness=false. Elapsed: 32.309479ms
Mar 31 18:47:45.226: INFO: Pod "var-expansion-8d3a8912-d954-46d2-8d46-ae01050fa285": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062844422s
STEP: Saw pod success
Mar 31 18:47:45.226: INFO: Pod "var-expansion-8d3a8912-d954-46d2-8d46-ae01050fa285" satisfied condition "Succeeded or Failed"
Mar 31 18:47:45.256: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod var-expansion-8d3a8912-d954-46d2-8d46-ae01050fa285 container dapi-container: <nil>
STEP: delete the pod
Mar 31 18:47:45.338: INFO: Waiting for pod var-expansion-8d3a8912-d954-46d2-8d46-ae01050fa285 to disappear
Mar 31 18:47:45.369: INFO: Pod var-expansion-8d3a8912-d954-46d2-8d46-ae01050fa285 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 31 18:47:45.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3929" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":283,"completed":63,"skipped":946,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 31 18:47:50.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3139" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":283,"completed":64,"skipped":973,"failed":0}
SSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 31 18:47:50.520: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 31 18:47:56.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5536" for this suite.
•{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":283,"completed":65,"skipped":976,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-2cbc6065-89ae-4bbd-9d8d-b16063917062
STEP: Creating a pod to test consume secrets
Mar 31 18:47:57.015: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e4d57162-2ecd-497b-97d1-2742ec10caf5" in namespace "projected-3745" to be "Succeeded or Failed"
Mar 31 18:47:57.050: INFO: Pod "pod-projected-secrets-e4d57162-2ecd-497b-97d1-2742ec10caf5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.589702ms
Mar 31 18:47:59.081: INFO: Pod "pod-projected-secrets-e4d57162-2ecd-497b-97d1-2742ec10caf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.065620622s
STEP: Saw pod success
Mar 31 18:47:59.081: INFO: Pod "pod-projected-secrets-e4d57162-2ecd-497b-97d1-2742ec10caf5" satisfied condition "Succeeded or Failed"
Mar 31 18:47:59.111: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-projected-secrets-e4d57162-2ecd-497b-97d1-2742ec10caf5 container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 31 18:47:59.187: INFO: Waiting for pod pod-projected-secrets-e4d57162-2ecd-497b-97d1-2742ec10caf5 to disappear
Mar 31 18:47:59.218: INFO: Pod pod-projected-secrets-e4d57162-2ecd-497b-97d1-2742ec10caf5 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 31 18:47:59.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3745" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":66,"skipped":992,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-c4ed7a31-0679-4597-8183-d182d7d2304e
STEP: Creating a pod to test consume configMaps
Mar 31 18:47:59.514: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c21d0aef-770c-4bf9-a38c-6b5ab8e29ee6" in namespace "projected-4214" to be "Succeeded or Failed"
Mar 31 18:47:59.547: INFO: Pod "pod-projected-configmaps-c21d0aef-770c-4bf9-a38c-6b5ab8e29ee6": Phase="Pending", Reason="", readiness=false. Elapsed: 33.704635ms
Mar 31 18:48:01.578: INFO: Pod "pod-projected-configmaps-c21d0aef-770c-4bf9-a38c-6b5ab8e29ee6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064254304s
STEP: Saw pod success
Mar 31 18:48:01.578: INFO: Pod "pod-projected-configmaps-c21d0aef-770c-4bf9-a38c-6b5ab8e29ee6" satisfied condition "Succeeded or Failed"
Mar 31 18:48:01.608: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-projected-configmaps-c21d0aef-770c-4bf9-a38c-6b5ab8e29ee6 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 18:48:01.685: INFO: Waiting for pod pod-projected-configmaps-c21d0aef-770c-4bf9-a38c-6b5ab8e29ee6 to disappear
Mar 31 18:48:01.716: INFO: Pod pod-projected-configmaps-c21d0aef-770c-4bf9-a38c-6b5ab8e29ee6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 31 18:48:01.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4214" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":67,"skipped":1017,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-f5a633ca-9df0-4745-81e4-d9a635cf2925
STEP: Creating a pod to test consume secrets
Mar 31 18:48:02.025: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d68ee31b-6ca9-478f-9fba-c78452585443" in namespace "projected-7861" to be "Succeeded or Failed"
Mar 31 18:48:02.059: INFO: Pod "pod-projected-secrets-d68ee31b-6ca9-478f-9fba-c78452585443": Phase="Pending", Reason="", readiness=false. Elapsed: 33.992371ms
Mar 31 18:48:04.090: INFO: Pod "pod-projected-secrets-d68ee31b-6ca9-478f-9fba-c78452585443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064968646s
STEP: Saw pod success
Mar 31 18:48:04.090: INFO: Pod "pod-projected-secrets-d68ee31b-6ca9-478f-9fba-c78452585443" satisfied condition "Succeeded or Failed"
Mar 31 18:48:04.121: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-projected-secrets-d68ee31b-6ca9-478f-9fba-c78452585443 container secret-volume-test: <nil>
STEP: delete the pod
Mar 31 18:48:04.198: INFO: Waiting for pod pod-projected-secrets-d68ee31b-6ca9-478f-9fba-c78452585443 to disappear
Mar 31 18:48:04.230: INFO: Pod pod-projected-secrets-d68ee31b-6ca9-478f-9fba-c78452585443 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 31 18:48:04.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7861" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":283,"completed":68,"skipped":1018,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 31 18:48:04.326: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 31 18:48:04.494: INFO: Waiting up to 5m0s for pod "pod-683dd983-85b1-44dd-80c4-fecc2fac0d95" in namespace "emptydir-5093" to be "Succeeded or Failed"
Mar 31 18:48:04.527: INFO: Pod "pod-683dd983-85b1-44dd-80c4-fecc2fac0d95": Phase="Pending", Reason="", readiness=false. Elapsed: 32.667089ms
Mar 31 18:48:06.557: INFO: Pod "pod-683dd983-85b1-44dd-80c4-fecc2fac0d95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062746073s
STEP: Saw pod success
Mar 31 18:48:06.557: INFO: Pod "pod-683dd983-85b1-44dd-80c4-fecc2fac0d95" satisfied condition "Succeeded or Failed"
Mar 31 18:48:06.587: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-683dd983-85b1-44dd-80c4-fecc2fac0d95 container test-container: <nil>
STEP: delete the pod
Mar 31 18:48:06.670: INFO: Waiting for pod pod-683dd983-85b1-44dd-80c4-fecc2fac0d95 to disappear
Mar 31 18:48:06.701: INFO: Pod pod-683dd983-85b1-44dd-80c4-fecc2fac0d95 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 18:48:06.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5093" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":69,"skipped":1030,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 31 18:48:11.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1936" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":283,"completed":70,"skipped":1054,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Mar 31 18:48:11.530: INFO: stderr: ""
Mar 31 18:48:11.530: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd.projectcalico.org/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 18:48:11.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7075" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":283,"completed":71,"skipped":1096,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 85 lines ...
Mar 31 18:49:16.709: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Mar 31 18:49:16.709: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 31 18:49:16.709: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Mar 31 18:49:16.709: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7829 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 31 18:49:17.078: INFO: rc: 1
Mar 31 18:49:17.078: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7829 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Mar 31 18:49:27.078: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7829 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 31 18:49:27.327: INFO: rc: 1
Mar 31 18:49:27.327: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7829 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
... skipping 270 lines ...
Mar 31 18:54:13.944: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7829 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 31 18:54:14.181: INFO: rc: 1
Mar 31 18:54:14.181: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7829 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Mar 31 18:54:24.182: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-7829 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 31 18:54:24.419: INFO: rc: 1
Mar 31 18:54:24.419: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Mar 31 18:54:24.419: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
... skipping 13 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:592
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":283,"completed":72,"skipped":1108,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 10 lines ...
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 18:54:44.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1576" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":283,"completed":73,"skipped":1113,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 83 lines ...
Mar 31 18:55:05.400: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6392/pods","resourceVersion":"11278"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 31 18:55:05.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6392" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":283,"completed":74,"skipped":1125,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 19 lines ...
Mar 31 18:55:08.976: INFO: Deleting pod "var-expansion-b5a5a5f2-390e-4287-bac8-5e1e8bfef5e6" in namespace "var-expansion-2600"
Mar 31 18:55:09.012: INFO: Wait up to 5m0s for pod "var-expansion-b5a5a5f2-390e-4287-bac8-5e1e8bfef5e6" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 31 18:55:45.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2600" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":283,"completed":75,"skipped":1138,"failed":0}
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 105 lines ...
<a href="btmp">btmp</a>
<a href="ch... (200; 32.104743ms)
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 31 18:55:45.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2110" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":283,"completed":76,"skipped":1140,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-71cc80e7-35d5-4ee5-9c6a-c000b030dc88
STEP: Creating a pod to test consume secrets
Mar 31 18:55:46.264: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cc16c3ed-7036-4c5b-8856-d9f3d1015aec" in namespace "projected-3094" to be "Succeeded or Failed"
Mar 31 18:55:46.295: INFO: Pod "pod-projected-secrets-cc16c3ed-7036-4c5b-8856-d9f3d1015aec": Phase="Pending", Reason="", readiness=false. Elapsed: 31.196057ms
Mar 31 18:55:48.326: INFO: Pod "pod-projected-secrets-cc16c3ed-7036-4c5b-8856-d9f3d1015aec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061248126s
STEP: Saw pod success
Mar 31 18:55:48.326: INFO: Pod "pod-projected-secrets-cc16c3ed-7036-4c5b-8856-d9f3d1015aec" satisfied condition "Succeeded or Failed"
Mar 31 18:55:48.357: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-projected-secrets-cc16c3ed-7036-4c5b-8856-d9f3d1015aec container projected-secret-volume-test: <nil>
STEP: delete the pod
Mar 31 18:55:48.447: INFO: Waiting for pod pod-projected-secrets-cc16c3ed-7036-4c5b-8856-d9f3d1015aec to disappear
Mar 31 18:55:48.478: INFO: Pod pod-projected-secrets-cc16c3ed-7036-4c5b-8856-d9f3d1015aec no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 31 18:55:48.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3094" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":77,"skipped":1144,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-7e38c681-907b-439e-b3c2-cdc350232a56
STEP: Creating a pod to test consume configMaps
Mar 31 18:55:48.773: INFO: Waiting up to 5m0s for pod "pod-configmaps-62d9a6e8-1500-4c34-af53-20feab40426b" in namespace "configmap-7427" to be "Succeeded or Failed"
Mar 31 18:55:48.804: INFO: Pod "pod-configmaps-62d9a6e8-1500-4c34-af53-20feab40426b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.848164ms
Mar 31 18:55:50.835: INFO: Pod "pod-configmaps-62d9a6e8-1500-4c34-af53-20feab40426b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061453299s
STEP: Saw pod success
Mar 31 18:55:50.835: INFO: Pod "pod-configmaps-62d9a6e8-1500-4c34-af53-20feab40426b" satisfied condition "Succeeded or Failed"
Mar 31 18:55:50.866: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-configmaps-62d9a6e8-1500-4c34-af53-20feab40426b container configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 18:55:50.952: INFO: Waiting for pod pod-configmaps-62d9a6e8-1500-4c34-af53-20feab40426b to disappear
Mar 31 18:55:50.984: INFO: Pod pod-configmaps-62d9a6e8-1500-4c34-af53-20feab40426b no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 31 18:55:50.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7427" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":78,"skipped":1176,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 31 18:55:53.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5231" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":283,"completed":79,"skipped":1180,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 31 18:56:09.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3176" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":283,"completed":80,"skipped":1183,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 132 lines ...
Mar 31 18:57:03.660: INFO: ss-2  test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-31 18:56:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-31 18:56:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-31 18:56:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-31 18:56:30 +0000 UTC  }]
Mar 31 18:57:03.660: INFO: 
Mar 31 18:57:03.660: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9312
Mar 31 18:57:04.692: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9312 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 31 18:57:05.042: INFO: rc: 1
Mar 31 18:57:05.043: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9312 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Mar 31 18:57:15.043: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9312 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 31 18:57:15.288: INFO: rc: 1
Mar 31 18:57:15.288: INFO: Waiting 10s to retry failed RunHostCmd: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9312 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
... skipping 280 lines ...
Mar 31 19:02:12.707: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig exec --namespace=statefulset-9312 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 31 19:02:12.945: INFO: rc: 1
Mar 31 19:02:12.946: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Mar 31 19:02:12.946: INFO: Scaling statefulset ss to 0
Mar 31 19:02:13.054: INFO: Waiting for statefulset status.replicas updated to 0
... skipping 13 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:592
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":283,"completed":81,"skipped":1219,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 31 19:02:24.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4652" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":283,"completed":82,"skipped":1235,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Mar 31 19:02:25.261: INFO: stderr: ""
Mar 31 19:02:25.261: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://34.98.69.39:443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://34.98.69.39:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 19:02:25.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4900" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":283,"completed":83,"skipped":1242,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 31 19:02:25.517: INFO: Waiting up to 5m0s for pod "busybox-user-65534-b77e3a27-28b1-4389-a943-2759a9631c4b" in namespace "security-context-test-8941" to be "Succeeded or Failed"
Mar 31 19:02:25.548: INFO: Pod "busybox-user-65534-b77e3a27-28b1-4389-a943-2759a9631c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.203156ms
Mar 31 19:02:27.579: INFO: Pod "busybox-user-65534-b77e3a27-28b1-4389-a943-2759a9631c4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061910412s
Mar 31 19:02:27.579: INFO: Pod "busybox-user-65534-b77e3a27-28b1-4389-a943-2759a9631c4b" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 31 19:02:27.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8941" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":84,"skipped":1252,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-f7c263f0-85a3-491e-b595-2305a701eef1
STEP: Creating a pod to test consume configMaps
Mar 31 19:02:27.868: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8329f688-2ef2-4b5b-90ef-083dff2fe23c" in namespace "projected-6810" to be "Succeeded or Failed"
Mar 31 19:02:27.899: INFO: Pod "pod-projected-configmaps-8329f688-2ef2-4b5b-90ef-083dff2fe23c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.869598ms
Mar 31 19:02:29.934: INFO: Pod "pod-projected-configmaps-8329f688-2ef2-4b5b-90ef-083dff2fe23c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.066166616s
STEP: Saw pod success
Mar 31 19:02:29.934: INFO: Pod "pod-projected-configmaps-8329f688-2ef2-4b5b-90ef-083dff2fe23c" satisfied condition "Succeeded or Failed"
Mar 31 19:02:29.965: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-projected-configmaps-8329f688-2ef2-4b5b-90ef-083dff2fe23c container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 19:02:30.057: INFO: Waiting for pod pod-projected-configmaps-8329f688-2ef2-4b5b-90ef-083dff2fe23c to disappear
Mar 31 19:02:30.087: INFO: Pod pod-projected-configmaps-8329f688-2ef2-4b5b-90ef-083dff2fe23c no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 31 19:02:30.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6810" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":283,"completed":85,"skipped":1268,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 31 19:02:30.317: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 19:02:30.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3591" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":283,"completed":86,"skipped":1307,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-8752/configmap-test-7b530e18-a28f-4e42-a8de-ee67ef67c5b3
STEP: Creating a pod to test consume configMaps
Mar 31 19:02:30.818: INFO: Waiting up to 5m0s for pod "pod-configmaps-76ab7db2-305f-47ed-b002-1c72a2d7cb6b" in namespace "configmap-8752" to be "Succeeded or Failed"
Mar 31 19:02:30.850: INFO: Pod "pod-configmaps-76ab7db2-305f-47ed-b002-1c72a2d7cb6b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.450535ms
Mar 31 19:02:32.882: INFO: Pod "pod-configmaps-76ab7db2-305f-47ed-b002-1c72a2d7cb6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063991048s
STEP: Saw pod success
Mar 31 19:02:32.882: INFO: Pod "pod-configmaps-76ab7db2-305f-47ed-b002-1c72a2d7cb6b" satisfied condition "Succeeded or Failed"
Mar 31 19:02:32.912: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-configmaps-76ab7db2-305f-47ed-b002-1c72a2d7cb6b container env-test: <nil>
STEP: delete the pod
Mar 31 19:02:33.004: INFO: Waiting for pod pod-configmaps-76ab7db2-305f-47ed-b002-1c72a2d7cb6b to disappear
Mar 31 19:02:33.034: INFO: Pod pod-configmaps-76ab7db2-305f-47ed-b002-1c72a2d7cb6b no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 31 19:02:33.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8752" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":283,"completed":87,"skipped":1322,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 24 lines ...
Mar 31 19:02:35.062: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 31 19:02:35.062: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 19:02:35.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5962" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":283,"completed":88,"skipped":1350,"failed":0}

------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 31 19:02:45.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2506" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":283,"completed":89,"skipped":1350,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 31 19:02:46.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0331 19:02:46.845355   25441 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-2438" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":283,"completed":90,"skipped":1365,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Mar 31 19:02:49.851: INFO: stderr: ""
Mar 31 19:02:49.851: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 19:02:49.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1088" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":283,"completed":91,"skipped":1379,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 22 lines ...
Mar 31 19:03:30.533: INFO: Waiting for statefulset status.replicas updated to 0
Mar 31 19:03:30.564: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 31 19:03:30.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3459" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":283,"completed":92,"skipped":1383,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 26 lines ...
Mar 31 19:04:21.250: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2322 /api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-b 329bfaff-819c-4b18-a0e3-c97dbc0eefd7 13092 0 2020-03-31 19:04:11 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 31 19:04:21.250: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2322 /api/v1/namespaces/watch-2322/configmaps/e2e-watch-test-configmap-b 329bfaff-819c-4b18-a0e3-c97dbc0eefd7 13092 0 2020-03-31 19:04:11 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 31 19:04:31.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2322" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":283,"completed":93,"skipped":1409,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 3 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  test/e2e/common/pods.go:180
[It] should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 31 19:04:33.683: INFO: Waiting up to 5m0s for pod "client-envvars-09648352-6696-4fc4-8f94-4b3b10efb2f3" in namespace "pods-9470" to be "Succeeded or Failed"
Mar 31 19:04:33.722: INFO: Pod "client-envvars-09648352-6696-4fc4-8f94-4b3b10efb2f3": Phase="Pending", Reason="", readiness=false. Elapsed: 39.632751ms
Mar 31 19:04:35.753: INFO: Pod "client-envvars-09648352-6696-4fc4-8f94-4b3b10efb2f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.070510483s
STEP: Saw pod success
Mar 31 19:04:35.753: INFO: Pod "client-envvars-09648352-6696-4fc4-8f94-4b3b10efb2f3" satisfied condition "Succeeded or Failed"
Mar 31 19:04:35.783: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod client-envvars-09648352-6696-4fc4-8f94-4b3b10efb2f3 container env3cont: <nil>
STEP: delete the pod
Mar 31 19:04:35.868: INFO: Waiting for pod client-envvars-09648352-6696-4fc4-8f94-4b3b10efb2f3 to disappear
Mar 31 19:04:35.899: INFO: Pod client-envvars-09648352-6696-4fc4-8f94-4b3b10efb2f3 no longer exists
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 31 19:04:35.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9470" for this suite.
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":283,"completed":94,"skipped":1435,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 20 lines ...
STEP: Waiting some time to make sure that toleration time passed.
Mar 31 19:06:51.722: INFO: Pod wasn't evicted. Test successful
[AfterEach] [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  test/e2e/framework/framework.go:175
Mar 31 19:06:51.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-7809" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":283,"completed":95,"skipped":1468,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-4643788c-4b7c-4520-b474-3e321d76796e
STEP: Creating a pod to test consume configMaps
Mar 31 19:06:52.015: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-48e15199-fa64-47bc-a850-777598da8ba0" in namespace "projected-6640" to be "Succeeded or Failed"
Mar 31 19:06:52.046: INFO: Pod "pod-projected-configmaps-48e15199-fa64-47bc-a850-777598da8ba0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.888894ms
Mar 31 19:06:54.076: INFO: Pod "pod-projected-configmaps-48e15199-fa64-47bc-a850-777598da8ba0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061366381s
STEP: Saw pod success
Mar 31 19:06:54.076: INFO: Pod "pod-projected-configmaps-48e15199-fa64-47bc-a850-777598da8ba0" satisfied condition "Succeeded or Failed"
Mar 31 19:06:54.106: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-projected-configmaps-48e15199-fa64-47bc-a850-777598da8ba0 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 19:06:54.193: INFO: Waiting for pod pod-projected-configmaps-48e15199-fa64-47bc-a850-777598da8ba0 to disappear
Mar 31 19:06:54.223: INFO: Pod pod-projected-configmaps-48e15199-fa64-47bc-a850-777598da8ba0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 31 19:06:54.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6640" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":96,"skipped":1501,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 12 lines ...
Mar 31 19:06:56.932: INFO: Terminating Job.batch foo pods took: 300.326815ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 31 19:07:35.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4895" for this suite.
•{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":283,"completed":97,"skipped":1512,"failed":0}
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 2 lines ...
Mar 31 19:07:35.256: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 31 19:07:36.516: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 31 19:07:36.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7353" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":98,"skipped":1513,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:07:36.844: INFO: Waiting up to 5m0s for pod "downwardapi-volume-216e169c-18ba-4dba-9d5b-38a5fd5dddf5" in namespace "downward-api-69" to be "Succeeded or Failed"
Mar 31 19:07:36.920: INFO: Pod "downwardapi-volume-216e169c-18ba-4dba-9d5b-38a5fd5dddf5": Phase="Pending", Reason="", readiness=false. Elapsed: 75.860708ms
Mar 31 19:07:38.952: INFO: Pod "downwardapi-volume-216e169c-18ba-4dba-9d5b-38a5fd5dddf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.107589539s
STEP: Saw pod success
Mar 31 19:07:38.952: INFO: Pod "downwardapi-volume-216e169c-18ba-4dba-9d5b-38a5fd5dddf5" satisfied condition "Succeeded or Failed"
Mar 31 19:07:38.981: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-216e169c-18ba-4dba-9d5b-38a5fd5dddf5 container client-container: <nil>
STEP: delete the pod
Mar 31 19:07:39.071: INFO: Waiting for pod downwardapi-volume-216e169c-18ba-4dba-9d5b-38a5fd5dddf5 to disappear
Mar 31 19:07:39.101: INFO: Pod downwardapi-volume-216e169c-18ba-4dba-9d5b-38a5fd5dddf5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 31 19:07:39.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-69" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":99,"skipped":1515,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Mar 31 19:07:42.092: INFO: Successfully updated pod "annotationupdatefe5b0da4-a794-4c9a-a251-c6336999786c"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 31 19:07:46.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7780" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":283,"completed":100,"skipped":1533,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 84 lines ...
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fhwl2 webserver-deployment-c7997dcc8- deployment-3448 /api/v1/namespaces/deployment-3448/pods/webserver-deployment-c7997dcc8-fhwl2 5d05502f-a197-4b49-adcc-377aef548668 14073 0 2020-03-31 19:07:50 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.187.141/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4b3cd-0fea-45c3-8758-9bc174468ff0 0xc002953db0 0xc002953db1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p4758,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p4758,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p4758,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.187.141,StartTime:2020-03-31 19:07:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.187.141,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 31 19:07:53.546: INFO: Pod "webserver-deployment-c7997dcc8-hcv22" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hcv22 webserver-deployment-c7997dcc8- deployment-3448 /api/v1/namespaces/deployment-3448/pods/webserver-deployment-c7997dcc8-hcv22 73ea8317-3c85-473d-baff-251558b7b39c 14103 0 2020-03-31 19:07:53 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4b3cd-0fea-45c3-8758-9bc174468ff0 0xc002953f40 0xc002953f41}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p4758,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p4758,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p4758,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:53 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 31 19:07:53.546: INFO: Pod "webserver-deployment-c7997dcc8-k6jr7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k6jr7 webserver-deployment-c7997dcc8- deployment-3448 /api/v1/namespaces/deployment-3448/pods/webserver-deployment-c7997dcc8-k6jr7 08003bd3-7458-41f4-bf4d-ec07120cb7a9 14090 0 2020-03-31 19:07:53 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4b3cd-0fea-45c3-8758-9bc174468ff0 0xc000e8e050 0xc000e8e051}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p4758,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p4758,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p4758,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:53 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 31 19:07:53.546: INFO: Pod "webserver-deployment-c7997dcc8-k7d7w" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k7d7w webserver-deployment-c7997dcc8- deployment-3448 /api/v1/namespaces/deployment-3448/pods/webserver-deployment-c7997dcc8-k7d7w 39cc3ac4-cefd-41b1-93da-cd854875fefa 14014 0 2020-03-31 19:07:50 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.234.229/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4b3cd-0fea-45c3-8758-9bc174468ff0 0xc000e8e8e0 0xc000e8e8e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p4758,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p4758,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p4758,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.5,PodIP:192.168.234.229,StartTime:2020-03-31 19:07:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.234.229,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 31 19:07:53.547: INFO: Pod "webserver-deployment-c7997dcc8-l6h58" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l6h58 webserver-deployment-c7997dcc8- deployment-3448 /api/v1/namespaces/deployment-3448/pods/webserver-deployment-c7997dcc8-l6h58 4c896d51-b9cc-461d-93be-f893ec263cd6 14106 0 2020-03-31 19:07:50 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[cni.projectcalico.org/podIP:192.168.187.142/32] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4b3cd-0fea-45c3-8758-9bc174468ff0 0xc000e8eb50 0xc000e8eb51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p4758,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p4758,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p4758,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:50 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:192.168.187.142,StartTime:2020-03-31 19:07:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.187.142,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 31 19:07:53.547: INFO: Pod "webserver-deployment-c7997dcc8-m6hv8" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-m6hv8 webserver-deployment-c7997dcc8- deployment-3448 /api/v1/namespaces/deployment-3448/pods/webserver-deployment-c7997dcc8-m6hv8 dc69ea95-dc78-4e5b-bf46-8efc9a412324 14085 0 2020-03-31 19:07:53 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4b3cd-0fea-45c3-8758-9bc174468ff0 0xc000e8efe0 0xc000e8efe1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p4758,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p4758,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p4758,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:53 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 31 19:07:53.547: INFO: Pod "webserver-deployment-c7997dcc8-mb5z9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mb5z9 webserver-deployment-c7997dcc8- deployment-3448 /api/v1/namespaces/deployment-3448/pods/webserver-deployment-c7997dcc8-mb5z9 2921fade-5a1d-45c0-bfad-75f1508823bd 14122 0 2020-03-31 19:07:53 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4b3cd-0fea-45c3-8758-9bc174468ff0 0xc000e8f1d0 0xc000e8f1d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p4758,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p4758,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p4758,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready 
status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:,StartTime:2020-03-31 19:07:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 6 lines ...
Mar 31 19:07:53.548: INFO: Pod "webserver-deployment-c7997dcc8-zgkd4" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zgkd4 webserver-deployment-c7997dcc8- deployment-3448 /api/v1/namespaces/deployment-3448/pods/webserver-deployment-c7997dcc8-zgkd4 059d192b-d1ff-461d-b2b3-7dcdc6c98d2c 14083 0 2020-03-31 19:07:53 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 42f4b3cd-0fea-45c3-8758-9bc174468ff0 0xc000e8f720 0xc000e8f721}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p4758,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p4758,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p4758,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:07:53 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 31 19:07:53.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3448" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":283,"completed":101,"skipped":1546,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:07:53.796: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53bb770c-6b61-418e-8ba2-f84574202b2b" in namespace "downward-api-2757" to be "Succeeded or Failed"
Mar 31 19:07:53.826: INFO: Pod "downwardapi-volume-53bb770c-6b61-418e-8ba2-f84574202b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.848905ms
Mar 31 19:07:55.857: INFO: Pod "downwardapi-volume-53bb770c-6b61-418e-8ba2-f84574202b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06181278s
Mar 31 19:07:57.890: INFO: Pod "downwardapi-volume-53bb770c-6b61-418e-8ba2-f84574202b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094815178s
Mar 31 19:07:59.921: INFO: Pod "downwardapi-volume-53bb770c-6b61-418e-8ba2-f84574202b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125821128s
Mar 31 19:08:01.952: INFO: Pod "downwardapi-volume-53bb770c-6b61-418e-8ba2-f84574202b2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.156373043s
STEP: Saw pod success
Mar 31 19:08:01.952: INFO: Pod "downwardapi-volume-53bb770c-6b61-418e-8ba2-f84574202b2b" satisfied condition "Succeeded or Failed"
Mar 31 19:08:01.982: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-53bb770c-6b61-418e-8ba2-f84574202b2b container client-container: <nil>
STEP: delete the pod
Mar 31 19:08:02.058: INFO: Waiting for pod downwardapi-volume-53bb770c-6b61-418e-8ba2-f84574202b2b to disappear
Mar 31 19:08:02.089: INFO: Pod downwardapi-volume-53bb770c-6b61-418e-8ba2-f84574202b2b no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 31 19:08:02.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2757" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":283,"completed":102,"skipped":1599,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 31 19:08:02.353: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-44c29d69-effe-45e5-807e-b21ae72d2900" in namespace "security-context-test-6643" to be "Succeeded or Failed"
Mar 31 19:08:02.383: INFO: Pod "alpine-nnp-false-44c29d69-effe-45e5-807e-b21ae72d2900": Phase="Pending", Reason="", readiness=false. Elapsed: 30.518532ms
Mar 31 19:08:04.415: INFO: Pod "alpine-nnp-false-44c29d69-effe-45e5-807e-b21ae72d2900": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062119949s
Mar 31 19:08:06.446: INFO: Pod "alpine-nnp-false-44c29d69-effe-45e5-807e-b21ae72d2900": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093138192s
Mar 31 19:08:08.479: INFO: Pod "alpine-nnp-false-44c29d69-effe-45e5-807e-b21ae72d2900": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.125932746s
Mar 31 19:08:08.479: INFO: Pod "alpine-nnp-false-44c29d69-effe-45e5-807e-b21ae72d2900" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 31 19:08:08.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6643" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":103,"skipped":1610,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-027e9aa8-6fc8-4f6b-8699-7dcad37aaece
STEP: Creating a pod to test consume configMaps
Mar 31 19:08:08.799: INFO: Waiting up to 5m0s for pod "pod-configmaps-4615a792-59d7-4a9a-b032-67ee19cae9d5" in namespace "configmap-2861" to be "Succeeded or Failed"
Mar 31 19:08:08.827: INFO: Pod "pod-configmaps-4615a792-59d7-4a9a-b032-67ee19cae9d5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.534569ms
Mar 31 19:08:10.858: INFO: Pod "pod-configmaps-4615a792-59d7-4a9a-b032-67ee19cae9d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058875922s
STEP: Saw pod success
Mar 31 19:08:10.858: INFO: Pod "pod-configmaps-4615a792-59d7-4a9a-b032-67ee19cae9d5" satisfied condition "Succeeded or Failed"
Mar 31 19:08:10.891: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-configmaps-4615a792-59d7-4a9a-b032-67ee19cae9d5 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 19:08:10.975: INFO: Waiting for pod pod-configmaps-4615a792-59d7-4a9a-b032-67ee19cae9d5 to disappear
Mar 31 19:08:11.004: INFO: Pod pod-configmaps-4615a792-59d7-4a9a-b032-67ee19cae9d5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 31 19:08:11.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2861" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":104,"skipped":1615,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 31 19:08:18.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-943" for this suite.
STEP: Destroying namespace "webhook-943-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":283,"completed":105,"skipped":1623,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 16 lines ...
  test/e2e/framework/framework.go:175
Mar 31 19:08:31.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7254" for this suite.
STEP: Destroying namespace "nsdeletetest-1800" for this suite.
Mar 31 19:08:32.093: INFO: Namespace nsdeletetest-1800 was already deleted
STEP: Destroying namespace "nsdeletetest-465" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":283,"completed":106,"skipped":1676,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 31 19:08:32.128: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Mar 31 19:08:32.295: INFO: Waiting up to 5m0s for pod "pod-33ace74b-48dc-4f94-b1a4-8ad13e2953f2" in namespace "emptydir-589" to be "Succeeded or Failed"
Mar 31 19:08:32.327: INFO: Pod "pod-33ace74b-48dc-4f94-b1a4-8ad13e2953f2": Phase="Pending", Reason="", readiness=false. Elapsed: 31.531694ms
Mar 31 19:08:34.357: INFO: Pod "pod-33ace74b-48dc-4f94-b1a4-8ad13e2953f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061592531s
STEP: Saw pod success
Mar 31 19:08:34.357: INFO: Pod "pod-33ace74b-48dc-4f94-b1a4-8ad13e2953f2" satisfied condition "Succeeded or Failed"
Mar 31 19:08:34.387: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-33ace74b-48dc-4f94-b1a4-8ad13e2953f2 container test-container: <nil>
STEP: delete the pod
Mar 31 19:08:34.466: INFO: Waiting for pod pod-33ace74b-48dc-4f94-b1a4-8ad13e2953f2 to disappear
Mar 31 19:08:34.497: INFO: Pod pod-33ace74b-48dc-4f94-b1a4-8ad13e2953f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 19:08:34.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-589" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":107,"skipped":1690,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
Mar 31 19:08:37.510: INFO: Unable to read jessie_udp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:37.543: INFO: Unable to read jessie_tcp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:37.574: INFO: Unable to read jessie_udp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:37.607: INFO: Unable to read jessie_tcp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:37.637: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:37.668: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:37.857: INFO: Lookups using dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9185 wheezy_tcp@dns-test-service.dns-9185 wheezy_udp@dns-test-service.dns-9185.svc wheezy_tcp@dns-test-service.dns-9185.svc wheezy_udp@_http._tcp.dns-test-service.dns-9185.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9185.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9185 jessie_tcp@dns-test-service.dns-9185 jessie_udp@dns-test-service.dns-9185.svc jessie_tcp@dns-test-service.dns-9185.svc jessie_udp@_http._tcp.dns-test-service.dns-9185.svc jessie_tcp@_http._tcp.dns-test-service.dns-9185.svc]

Mar 31 19:08:42.893: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:42.925: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:42.956: INFO: Unable to read wheezy_udp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:42.989: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:43.020: INFO: Unable to read wheezy_udp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
... skipping 5 lines ...
Mar 31 19:08:43.419: INFO: Unable to read jessie_udp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:43.451: INFO: Unable to read jessie_tcp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:43.483: INFO: Unable to read jessie_udp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:43.515: INFO: Unable to read jessie_tcp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:43.546: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:43.578: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:43.783: INFO: Lookups using dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9185 wheezy_tcp@dns-test-service.dns-9185 wheezy_udp@dns-test-service.dns-9185.svc wheezy_tcp@dns-test-service.dns-9185.svc wheezy_udp@_http._tcp.dns-test-service.dns-9185.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9185.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9185 jessie_tcp@dns-test-service.dns-9185 jessie_udp@dns-test-service.dns-9185.svc jessie_tcp@dns-test-service.dns-9185.svc jessie_udp@_http._tcp.dns-test-service.dns-9185.svc jessie_tcp@_http._tcp.dns-test-service.dns-9185.svc]

Mar 31 19:08:47.890: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:47.922: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:47.954: INFO: Unable to read wheezy_udp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:47.986: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:48.019: INFO: Unable to read wheezy_udp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
... skipping 5 lines ...
Mar 31 19:08:48.407: INFO: Unable to read jessie_udp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:48.438: INFO: Unable to read jessie_tcp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:48.470: INFO: Unable to read jessie_udp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:48.502: INFO: Unable to read jessie_tcp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:48.535: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:48.567: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:48.755: INFO: Lookups using dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9185 wheezy_tcp@dns-test-service.dns-9185 wheezy_udp@dns-test-service.dns-9185.svc wheezy_tcp@dns-test-service.dns-9185.svc wheezy_udp@_http._tcp.dns-test-service.dns-9185.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9185.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9185 jessie_tcp@dns-test-service.dns-9185 jessie_udp@dns-test-service.dns-9185.svc jessie_tcp@dns-test-service.dns-9185.svc jessie_udp@_http._tcp.dns-test-service.dns-9185.svc jessie_tcp@_http._tcp.dns-test-service.dns-9185.svc]

Mar 31 19:08:52.890: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:52.922: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:52.953: INFO: Unable to read wheezy_udp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:52.984: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:53.016: INFO: Unable to read wheezy_udp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
... skipping 5 lines ...
Mar 31 19:08:53.399: INFO: Unable to read jessie_udp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:53.431: INFO: Unable to read jessie_tcp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:53.463: INFO: Unable to read jessie_udp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:53.494: INFO: Unable to read jessie_tcp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:53.525: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:53.557: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:53.758: INFO: Lookups using dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9185 wheezy_tcp@dns-test-service.dns-9185 wheezy_udp@dns-test-service.dns-9185.svc wheezy_tcp@dns-test-service.dns-9185.svc wheezy_udp@_http._tcp.dns-test-service.dns-9185.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9185.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9185 jessie_tcp@dns-test-service.dns-9185 jessie_udp@dns-test-service.dns-9185.svc jessie_tcp@dns-test-service.dns-9185.svc jessie_udp@_http._tcp.dns-test-service.dns-9185.svc jessie_tcp@_http._tcp.dns-test-service.dns-9185.svc]

Mar 31 19:08:57.888: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:57.920: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:57.951: INFO: Unable to read wheezy_udp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:57.983: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:58.013: INFO: Unable to read wheezy_udp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
... skipping 5 lines ...
Mar 31 19:08:58.400: INFO: Unable to read jessie_udp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:58.431: INFO: Unable to read jessie_tcp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:58.462: INFO: Unable to read jessie_udp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:58.495: INFO: Unable to read jessie_tcp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:58.526: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:58.558: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:08:58.752: INFO: Lookups using dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9185 wheezy_tcp@dns-test-service.dns-9185 wheezy_udp@dns-test-service.dns-9185.svc wheezy_tcp@dns-test-service.dns-9185.svc wheezy_udp@_http._tcp.dns-test-service.dns-9185.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9185.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9185 jessie_tcp@dns-test-service.dns-9185 jessie_udp@dns-test-service.dns-9185.svc jessie_tcp@dns-test-service.dns-9185.svc jessie_udp@_http._tcp.dns-test-service.dns-9185.svc jessie_tcp@_http._tcp.dns-test-service.dns-9185.svc]

Mar 31 19:09:02.890: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:09:02.922: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:09:02.954: INFO: Unable to read wheezy_udp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:09:02.986: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:09:03.019: INFO: Unable to read wheezy_udp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
... skipping 5 lines ...
Mar 31 19:09:03.401: INFO: Unable to read jessie_udp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:09:03.432: INFO: Unable to read jessie_tcp@dns-test-service.dns-9185 from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:09:03.464: INFO: Unable to read jessie_udp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:09:03.496: INFO: Unable to read jessie_tcp@dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:09:03.527: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:09:03.558: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9185.svc from pod dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5: the server could not find the requested resource (get pods dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5)
Mar 31 19:09:03.751: INFO: Lookups using dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9185 wheezy_tcp@dns-test-service.dns-9185 wheezy_udp@dns-test-service.dns-9185.svc wheezy_tcp@dns-test-service.dns-9185.svc wheezy_udp@_http._tcp.dns-test-service.dns-9185.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9185.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9185 jessie_tcp@dns-test-service.dns-9185 jessie_udp@dns-test-service.dns-9185.svc jessie_tcp@dns-test-service.dns-9185.svc jessie_udp@_http._tcp.dns-test-service.dns-9185.svc jessie_tcp@_http._tcp.dns-test-service.dns-9185.svc]

Mar 31 19:09:08.758: INFO: DNS probes using dns-9185/dns-test-079ac4fd-7d3a-4105-9788-5d17d53393b5 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 31 19:09:08.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9185" for this suite.
•{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":283,"completed":108,"skipped":1699,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 31 19:09:11.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4378" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":283,"completed":109,"skipped":1722,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 31 19:09:11.699: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:135
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 31 19:09:12.067: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-gce-serial-1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 31 19:09:12.067: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-gce-serial-1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 31 19:09:12.067: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-gce-serial-1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
... skipping 6 lines ...
Mar 31 19:09:13.190: INFO: Node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal is running more than one daemon pod
Mar 31 19:09:14.157: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-gce-serial-1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 31 19:09:14.157: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-gce-serial-1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 31 19:09:14.157: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-gce-serial-1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 31 19:09:14.187: INFO: Number of nodes with available pods: 2
Mar 31 19:09:14.187: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar 31 19:09:14.315: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-gce-serial-1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 31 19:09:14.315: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-gce-serial-1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 31 19:09:14.315: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-gce-serial-1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 31 19:09:14.346: INFO: Number of nodes with available pods: 1
Mar 31 19:09:14.346: INFO: Node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal is running more than one daemon pod
Mar 31 19:09:15.403: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-gce-serial-1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
... skipping 3 lines ...
Mar 31 19:09:15.433: INFO: Node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal is running more than one daemon pod
Mar 31 19:09:16.403: INFO: DaemonSet pods can't tolerate node test1-controlplane-0.c.k8s-gce-serial-1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 31 19:09:16.403: INFO: DaemonSet pods can't tolerate node test1-controlplane-1.c.k8s-gce-serial-1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 31 19:09:16.403: INFO: DaemonSet pods can't tolerate node test1-controlplane-2.c.k8s-gce-serial-1-5.internal with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Mar 31 19:09:16.434: INFO: Number of nodes with available pods: 2
Mar 31 19:09:16.434: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2023, will wait for the garbage collector to delete the pods
Mar 31 19:09:16.613: INFO: Deleting DaemonSet.extensions daemon-set took: 34.844109ms
Mar 31 19:09:16.914: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.360841ms
... skipping 4 lines ...
Mar 31 19:09:25.406: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2023/pods","resourceVersion":"15166"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 31 19:09:25.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2023" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":283,"completed":110,"skipped":1732,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 31 19:09:28.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5376" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":111,"skipped":1738,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
  test/e2e/framework/framework.go:175
Mar 31 19:09:43.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7344" for this suite.
STEP: Destroying namespace "webhook-7344-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":283,"completed":112,"skipped":1785,"failed":0}

------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Mar 31 19:09:45.970: INFO: Trying to dial the pod
Mar 31 19:09:51.062: INFO: Controller my-hostname-basic-667ee183-9b15-4e7c-838d-19622780fe21: Got expected result from replica 1 [my-hostname-basic-667ee183-9b15-4e7c-838d-19622780fe21-sl5r5]: "my-hostname-basic-667ee183-9b15-4e7c-838d-19622780fe21-sl5r5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Mar 31 19:09:51.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5850" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":283,"completed":113,"skipped":1785,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 31 19:09:51.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1998" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":283,"completed":114,"skipped":1797,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:09:51.554: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df1c2e6d-3f29-452c-bab8-3604f9775340" in namespace "projected-2983" to be "Succeeded or Failed"
Mar 31 19:09:51.583: INFO: Pod "downwardapi-volume-df1c2e6d-3f29-452c-bab8-3604f9775340": Phase="Pending", Reason="", readiness=false. Elapsed: 29.627208ms
Mar 31 19:09:53.614: INFO: Pod "downwardapi-volume-df1c2e6d-3f29-452c-bab8-3604f9775340": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060086933s
STEP: Saw pod success
Mar 31 19:09:53.614: INFO: Pod "downwardapi-volume-df1c2e6d-3f29-452c-bab8-3604f9775340" satisfied condition "Succeeded or Failed"
Mar 31 19:09:53.644: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-df1c2e6d-3f29-452c-bab8-3604f9775340 container client-container: <nil>
STEP: delete the pod
Mar 31 19:09:53.724: INFO: Waiting for pod downwardapi-volume-df1c2e6d-3f29-452c-bab8-3604f9775340 to disappear
Mar 31 19:09:53.754: INFO: Pod downwardapi-volume-df1c2e6d-3f29-452c-bab8-3604f9775340 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 31 19:09:53.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2983" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":283,"completed":115,"skipped":1809,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-5d7e4c01-bc43-492e-ae16-a8b5a7d7f345
STEP: Creating a pod to test consume configMaps
Mar 31 19:09:54.051: INFO: Waiting up to 5m0s for pod "pod-configmaps-f5a2a2db-4e0c-46f6-894c-2e8a3e721118" in namespace "configmap-365" to be "Succeeded or Failed"
Mar 31 19:09:54.081: INFO: Pod "pod-configmaps-f5a2a2db-4e0c-46f6-894c-2e8a3e721118": Phase="Pending", Reason="", readiness=false. Elapsed: 30.434542ms
Mar 31 19:09:56.112: INFO: Pod "pod-configmaps-f5a2a2db-4e0c-46f6-894c-2e8a3e721118": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061132963s
STEP: Saw pod success
Mar 31 19:09:56.112: INFO: Pod "pod-configmaps-f5a2a2db-4e0c-46f6-894c-2e8a3e721118" satisfied condition "Succeeded or Failed"
Mar 31 19:09:56.142: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-configmaps-f5a2a2db-4e0c-46f6-894c-2e8a3e721118 container configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 19:09:56.221: INFO: Waiting for pod pod-configmaps-f5a2a2db-4e0c-46f6-894c-2e8a3e721118 to disappear
Mar 31 19:09:56.251: INFO: Pod pod-configmaps-f5a2a2db-4e0c-46f6-894c-2e8a3e721118 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 31 19:09:56.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-365" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":116,"skipped":1831,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 31 19:10:12.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4440" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":283,"completed":117,"skipped":1846,"failed":0}
S
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-cc597a3b-4d37-49c9-b258-e435b4062dde
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 31 19:10:17.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7957" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":118,"skipped":1847,"failed":0}

------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-4m5l
STEP: Creating a pod to test atomic-volume-subpath
Mar 31 19:10:18.091: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-4m5l" in namespace "subpath-1462" to be "Succeeded or Failed"
Mar 31 19:10:18.122: INFO: Pod "pod-subpath-test-downwardapi-4m5l": Phase="Pending", Reason="", readiness=false. Elapsed: 30.402105ms
Mar 31 19:10:20.152: INFO: Pod "pod-subpath-test-downwardapi-4m5l": Phase="Running", Reason="", readiness=true. Elapsed: 2.0605209s
Mar 31 19:10:22.188: INFO: Pod "pod-subpath-test-downwardapi-4m5l": Phase="Running", Reason="", readiness=true. Elapsed: 4.096162561s
Mar 31 19:10:24.219: INFO: Pod "pod-subpath-test-downwardapi-4m5l": Phase="Running", Reason="", readiness=true. Elapsed: 6.127815745s
Mar 31 19:10:26.250: INFO: Pod "pod-subpath-test-downwardapi-4m5l": Phase="Running", Reason="", readiness=true. Elapsed: 8.158286174s
Mar 31 19:10:28.280: INFO: Pod "pod-subpath-test-downwardapi-4m5l": Phase="Running", Reason="", readiness=true. Elapsed: 10.188572504s
Mar 31 19:10:30.310: INFO: Pod "pod-subpath-test-downwardapi-4m5l": Phase="Running", Reason="", readiness=true. Elapsed: 12.218855298s
Mar 31 19:10:32.341: INFO: Pod "pod-subpath-test-downwardapi-4m5l": Phase="Running", Reason="", readiness=true. Elapsed: 14.249105116s
Mar 31 19:10:34.372: INFO: Pod "pod-subpath-test-downwardapi-4m5l": Phase="Running", Reason="", readiness=true. Elapsed: 16.280496743s
Mar 31 19:10:36.403: INFO: Pod "pod-subpath-test-downwardapi-4m5l": Phase="Running", Reason="", readiness=true. Elapsed: 18.31126888s
Mar 31 19:10:38.434: INFO: Pod "pod-subpath-test-downwardapi-4m5l": Phase="Running", Reason="", readiness=true. Elapsed: 20.342357929s
Mar 31 19:10:40.464: INFO: Pod "pod-subpath-test-downwardapi-4m5l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.372351789s
STEP: Saw pod success
Mar 31 19:10:40.464: INFO: Pod "pod-subpath-test-downwardapi-4m5l" satisfied condition "Succeeded or Failed"
Mar 31 19:10:40.493: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-subpath-test-downwardapi-4m5l container test-container-subpath-downwardapi-4m5l: <nil>
STEP: delete the pod
Mar 31 19:10:40.568: INFO: Waiting for pod pod-subpath-test-downwardapi-4m5l to disappear
Mar 31 19:10:40.598: INFO: Pod pod-subpath-test-downwardapi-4m5l no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-4m5l
Mar 31 19:10:40.598: INFO: Deleting pod "pod-subpath-test-downwardapi-4m5l" in namespace "subpath-1462"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 31 19:10:40.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1462" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":283,"completed":119,"skipped":1847,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 34 lines ...
Mar 31 19:12:52.804: INFO: Deleting pod "var-expansion-a6db42a8-e148-4c8a-a92f-499a09fb0c82" in namespace "var-expansion-4343"
Mar 31 19:12:52.841: INFO: Wait up to 5m0s for pod "var-expansion-a6db42a8-e148-4c8a-a92f-499a09fb0c82" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 31 19:13:36.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4343" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":283,"completed":120,"skipped":1864,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:13:37.170: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af03608d-e530-4b83-a38a-790b23ffdde5" in namespace "projected-799" to be "Succeeded or Failed"
Mar 31 19:13:37.203: INFO: Pod "downwardapi-volume-af03608d-e530-4b83-a38a-790b23ffdde5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.997483ms
Mar 31 19:13:39.234: INFO: Pod "downwardapi-volume-af03608d-e530-4b83-a38a-790b23ffdde5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063385568s
STEP: Saw pod success
Mar 31 19:13:39.234: INFO: Pod "downwardapi-volume-af03608d-e530-4b83-a38a-790b23ffdde5" satisfied condition "Succeeded or Failed"
Mar 31 19:13:39.264: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-af03608d-e530-4b83-a38a-790b23ffdde5 container client-container: <nil>
STEP: delete the pod
Mar 31 19:13:39.359: INFO: Waiting for pod downwardapi-volume-af03608d-e530-4b83-a38a-790b23ffdde5 to disappear
Mar 31 19:13:39.389: INFO: Pod downwardapi-volume-af03608d-e530-4b83-a38a-790b23ffdde5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 31 19:13:39.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-799" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":283,"completed":121,"skipped":1923,"failed":0}
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 31 19:13:39.480: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-44e689ae-7ace-49e6-bdea-07a22be8eddd
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Mar 31 19:13:39.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1872" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":283,"completed":122,"skipped":1930,"failed":0}
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 7 lines ...
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 31 19:13:42.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8660" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":123,"skipped":1936,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-62789a75-6474-43af-bfc3-e997604a54ab
STEP: Creating a pod to test consume secrets
Mar 31 19:13:42.326: INFO: Waiting up to 5m0s for pod "pod-secrets-f6e9e035-8c36-4bf7-a025-ccea8eafce0d" in namespace "secrets-6356" to be "Succeeded or Failed"
Mar 31 19:13:42.356: INFO: Pod "pod-secrets-f6e9e035-8c36-4bf7-a025-ccea8eafce0d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.151406ms
Mar 31 19:13:44.387: INFO: Pod "pod-secrets-f6e9e035-8c36-4bf7-a025-ccea8eafce0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060784254s
STEP: Saw pod success
Mar 31 19:13:44.387: INFO: Pod "pod-secrets-f6e9e035-8c36-4bf7-a025-ccea8eafce0d" satisfied condition "Succeeded or Failed"
Mar 31 19:13:44.416: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-secrets-f6e9e035-8c36-4bf7-a025-ccea8eafce0d container secret-volume-test: <nil>
STEP: delete the pod
Mar 31 19:13:44.514: INFO: Waiting for pod pod-secrets-f6e9e035-8c36-4bf7-a025-ccea8eafce0d to disappear
Mar 31 19:13:44.544: INFO: Pod pod-secrets-f6e9e035-8c36-4bf7-a025-ccea8eafce0d no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 31 19:13:44.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6356" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":124,"skipped":1943,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 31 19:13:45.039: INFO: stderr: ""
Mar 31 19:13:45.039: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.1.177+7b621d82d5d212\", GitCommit:\"7b621d82d5d2120f67c5d3b731f23564088e74ab\", GitTreeState:\"clean\", BuildDate:\"2020-02-11T14:24:02Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.1\", GitCommit:\"d647ddbd755faf07169599a625faf302ffc34458\", GitTreeState:\"clean\", BuildDate:\"2019-10-02T16:51:36Z\", GoVersion:\"go1.12.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 19:13:45.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9501" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":283,"completed":125,"skipped":1977,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 8 lines ...
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 31 19:13:52.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-368" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":283,"completed":126,"skipped":1980,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 121 lines ...
Mar 31 19:14:10.293: INFO: stderr: ""
Mar 31 19:14:10.293: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 19:14:10.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6062" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":283,"completed":127,"skipped":1991,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 31 19:14:10.401: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 31 19:14:10.572: INFO: Waiting up to 5m0s for pod "pod-2c643246-0769-4cd8-958a-99e66060157e" in namespace "emptydir-6989" to be "Succeeded or Failed"
Mar 31 19:14:10.602: INFO: Pod "pod-2c643246-0769-4cd8-958a-99e66060157e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.231381ms
Mar 31 19:14:12.633: INFO: Pod "pod-2c643246-0769-4cd8-958a-99e66060157e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060790242s
STEP: Saw pod success
Mar 31 19:14:12.633: INFO: Pod "pod-2c643246-0769-4cd8-958a-99e66060157e" satisfied condition "Succeeded or Failed"
Mar 31 19:14:12.663: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-2c643246-0769-4cd8-958a-99e66060157e container test-container: <nil>
STEP: delete the pod
Mar 31 19:14:12.740: INFO: Waiting for pod pod-2c643246-0769-4cd8-958a-99e66060157e to disappear
Mar 31 19:14:12.771: INFO: Pod pod-2c643246-0769-4cd8-958a-99e66060157e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 19:14:12.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6989" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":128,"skipped":2008,"failed":0}

------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 17 lines ...
Mar 31 19:16:39.331: INFO: Restart count of pod container-probe-7343/liveness-f5354999-ccd2-4fc0-8647-941030c9ac1d is now 5 (2m24.210052142s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 31 19:16:39.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7343" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":283,"completed":129,"skipped":2008,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:16:39.647: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81e34f57-3ddd-444f-a901-9df329e8ee9e" in namespace "projected-2535" to be "Succeeded or Failed"
Mar 31 19:16:39.677: INFO: Pod "downwardapi-volume-81e34f57-3ddd-444f-a901-9df329e8ee9e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.030286ms
Mar 31 19:16:41.707: INFO: Pod "downwardapi-volume-81e34f57-3ddd-444f-a901-9df329e8ee9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059797399s
STEP: Saw pod success
Mar 31 19:16:41.707: INFO: Pod "downwardapi-volume-81e34f57-3ddd-444f-a901-9df329e8ee9e" satisfied condition "Succeeded or Failed"
Mar 31 19:16:41.737: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-81e34f57-3ddd-444f-a901-9df329e8ee9e container client-container: <nil>
STEP: delete the pod
Mar 31 19:16:41.823: INFO: Waiting for pod downwardapi-volume-81e34f57-3ddd-444f-a901-9df329e8ee9e to disappear
Mar 31 19:16:41.854: INFO: Pod downwardapi-volume-81e34f57-3ddd-444f-a901-9df329e8ee9e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 31 19:16:41.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2535" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":130,"skipped":2014,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Mar 31 19:16:42.305: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5625 /api/v1/namespaces/watch-5625/configmaps/e2e-watch-test-resource-version 56693d1f-6864-4f3d-b52d-803e79f6606e 16796 0 2020-03-31 19:16:42 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 31 19:16:42.305: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5625 /api/v1/namespaces/watch-5625/configmaps/e2e-watch-test-resource-version 56693d1f-6864-4f3d-b52d-803e79f6606e 16797 0 2020-03-31 19:16:42 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 31 19:16:42.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5625" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":283,"completed":131,"skipped":2024,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 26 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 31 19:16:50.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6656" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":283,"completed":132,"skipped":2032,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Mar 31 19:17:43.855: INFO: Restart count of pod container-probe-1520/busybox-e17ee250-1b95-463e-93fb-8724de8e2f75 is now 1 (50.793805976s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 31 19:17:43.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1520" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":283,"completed":133,"skipped":2049,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-286a410a-0881-471c-af3d-4a57f1d5387a
STEP: Creating a pod to test consume secrets
Mar 31 19:17:44.199: INFO: Waiting up to 5m0s for pod "pod-secrets-4f0263f1-397e-4aa8-b2bc-9f73c7cac2cb" in namespace "secrets-6001" to be "Succeeded or Failed"
Mar 31 19:17:44.230: INFO: Pod "pod-secrets-4f0263f1-397e-4aa8-b2bc-9f73c7cac2cb": Phase="Pending", Reason="", readiness=false. Elapsed: 31.051377ms
Mar 31 19:17:46.260: INFO: Pod "pod-secrets-4f0263f1-397e-4aa8-b2bc-9f73c7cac2cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061038188s
STEP: Saw pod success
Mar 31 19:17:46.260: INFO: Pod "pod-secrets-4f0263f1-397e-4aa8-b2bc-9f73c7cac2cb" satisfied condition "Succeeded or Failed"
Mar 31 19:17:46.290: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-secrets-4f0263f1-397e-4aa8-b2bc-9f73c7cac2cb container secret-volume-test: <nil>
STEP: delete the pod
Mar 31 19:17:46.378: INFO: Waiting for pod pod-secrets-4f0263f1-397e-4aa8-b2bc-9f73c7cac2cb to disappear
Mar 31 19:17:46.410: INFO: Pod pod-secrets-4f0263f1-397e-4aa8-b2bc-9f73c7cac2cb no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 31 19:17:46.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6001" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":134,"skipped":2070,"failed":0}

------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 31 19:17:48.759: INFO: Initial restart count of pod liveness-710c27f7-e284-41a6-a66e-91dd8942f425 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 31 19:21:50.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8565" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":283,"completed":135,"skipped":2070,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:21:50.698: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2625e69e-4706-47d7-88d5-13dd5cd65c3f" in namespace "downward-api-4914" to be "Succeeded or Failed"
Mar 31 19:21:50.733: INFO: Pod "downwardapi-volume-2625e69e-4706-47d7-88d5-13dd5cd65c3f": Phase="Pending", Reason="", readiness=false. Elapsed: 35.054713ms
Mar 31 19:21:52.763: INFO: Pod "downwardapi-volume-2625e69e-4706-47d7-88d5-13dd5cd65c3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064733656s
STEP: Saw pod success
Mar 31 19:21:52.763: INFO: Pod "downwardapi-volume-2625e69e-4706-47d7-88d5-13dd5cd65c3f" satisfied condition "Succeeded or Failed"
Mar 31 19:21:52.793: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-2625e69e-4706-47d7-88d5-13dd5cd65c3f container client-container: <nil>
STEP: delete the pod
Mar 31 19:21:52.883: INFO: Waiting for pod downwardapi-volume-2625e69e-4706-47d7-88d5-13dd5cd65c3f to disappear
Mar 31 19:21:52.914: INFO: Pod downwardapi-volume-2625e69e-4706-47d7-88d5-13dd5cd65c3f no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 31 19:21:52.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4914" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":283,"completed":136,"skipped":2078,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:21:53.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5942016e-59bd-4c29-94f0-3e8bd1c8d889" in namespace "projected-3542" to be "Succeeded or Failed"
Mar 31 19:21:53.208: INFO: Pod "downwardapi-volume-5942016e-59bd-4c29-94f0-3e8bd1c8d889": Phase="Pending", Reason="", readiness=false. Elapsed: 29.845931ms
Mar 31 19:21:55.239: INFO: Pod "downwardapi-volume-5942016e-59bd-4c29-94f0-3e8bd1c8d889": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060268038s
STEP: Saw pod success
Mar 31 19:21:55.239: INFO: Pod "downwardapi-volume-5942016e-59bd-4c29-94f0-3e8bd1c8d889" satisfied condition "Succeeded or Failed"
Mar 31 19:21:55.268: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-5942016e-59bd-4c29-94f0-3e8bd1c8d889 container client-container: <nil>
STEP: delete the pod
Mar 31 19:21:55.360: INFO: Waiting for pod downwardapi-volume-5942016e-59bd-4c29-94f0-3e8bd1c8d889 to disappear
Mar 31 19:21:55.392: INFO: Pod downwardapi-volume-5942016e-59bd-4c29-94f0-3e8bd1c8d889 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 31 19:21:55.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3542" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":137,"skipped":2118,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-vrqv
STEP: Creating a pod to test atomic-volume-subpath
Mar 31 19:21:55.714: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vrqv" in namespace "subpath-5321" to be "Succeeded or Failed"
Mar 31 19:21:55.750: INFO: Pod "pod-subpath-test-configmap-vrqv": Phase="Pending", Reason="", readiness=false. Elapsed: 36.050979ms
Mar 31 19:21:57.780: INFO: Pod "pod-subpath-test-configmap-vrqv": Phase="Running", Reason="", readiness=true. Elapsed: 2.066683497s
Mar 31 19:21:59.811: INFO: Pod "pod-subpath-test-configmap-vrqv": Phase="Running", Reason="", readiness=true. Elapsed: 4.096976497s
Mar 31 19:22:01.841: INFO: Pod "pod-subpath-test-configmap-vrqv": Phase="Running", Reason="", readiness=true. Elapsed: 6.127591856s
Mar 31 19:22:03.872: INFO: Pod "pod-subpath-test-configmap-vrqv": Phase="Running", Reason="", readiness=true. Elapsed: 8.158392646s
Mar 31 19:22:05.902: INFO: Pod "pod-subpath-test-configmap-vrqv": Phase="Running", Reason="", readiness=true. Elapsed: 10.188591791s
Mar 31 19:22:07.933: INFO: Pod "pod-subpath-test-configmap-vrqv": Phase="Running", Reason="", readiness=true. Elapsed: 12.219464808s
Mar 31 19:22:09.965: INFO: Pod "pod-subpath-test-configmap-vrqv": Phase="Running", Reason="", readiness=true. Elapsed: 14.25117281s
Mar 31 19:22:11.997: INFO: Pod "pod-subpath-test-configmap-vrqv": Phase="Running", Reason="", readiness=true. Elapsed: 16.283263123s
Mar 31 19:22:14.027: INFO: Pod "pod-subpath-test-configmap-vrqv": Phase="Running", Reason="", readiness=true. Elapsed: 18.313366023s
Mar 31 19:22:16.057: INFO: Pod "pod-subpath-test-configmap-vrqv": Phase="Running", Reason="", readiness=true. Elapsed: 20.343367688s
Mar 31 19:22:18.089: INFO: Pod "pod-subpath-test-configmap-vrqv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.37478776s
STEP: Saw pod success
Mar 31 19:22:18.089: INFO: Pod "pod-subpath-test-configmap-vrqv" satisfied condition "Succeeded or Failed"
Mar 31 19:22:18.119: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-subpath-test-configmap-vrqv container test-container-subpath-configmap-vrqv: <nil>
STEP: delete the pod
Mar 31 19:22:18.195: INFO: Waiting for pod pod-subpath-test-configmap-vrqv to disappear
Mar 31 19:22:18.225: INFO: Pod pod-subpath-test-configmap-vrqv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vrqv
Mar 31 19:22:18.225: INFO: Deleting pod "pod-subpath-test-configmap-vrqv" in namespace "subpath-5321"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 31 19:22:18.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5321" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":283,"completed":138,"skipped":2148,"failed":0}

------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 7 lines ...
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 31 19:23:18.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9479" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":283,"completed":139,"skipped":2148,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:175
Mar 31 19:23:22.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-12" for this suite.
STEP: Destroying namespace "webhook-12-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":283,"completed":140,"skipped":2171,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-eb051385-8b82-40db-ab30-214f1f847815
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 31 19:23:27.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7398" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":141,"skipped":2199,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Mar 31 19:23:28.416: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8810 /api/v1/namespaces/watch-8810/configmaps/e2e-watch-test-watch-closed cc7dc5e4-9859-4820-b105-b518831c0fff 18006 0 2020-03-31 19:23:28 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 31 19:23:28.416: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8810 /api/v1/namespaces/watch-8810/configmaps/e2e-watch-test-watch-closed cc7dc5e4-9859-4820-b105-b518831c0fff 18007 0 2020-03-31 19:23:28 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 31 19:23:28.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8810" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":283,"completed":142,"skipped":2201,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 18 lines ...
STEP: Deleting second CR
Mar 31 19:24:19.120: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-31T19:23:38Z generation:2 name:name2 resourceVersion:18157 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4d497d84-ae59-46d9-9d34-96411ae5dcef] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 19:24:29.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2703" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":283,"completed":143,"skipped":2226,"failed":0}
SSS
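Custom resources have no typed client, so a watch like the one above goes through the dynamic client, keyed by group/version/resource. A sketch under the assumption of a served, cluster-scoped CRD like the log's noxus resource; the GVR and kubeconfig path are illustrative:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
        if err != nil {
            panic(err)
        }
        dyn := dynamic.NewForConfigOrDie(cfg)

        // GVR matching the CRD watched in the spec above; any served CRD works.
        gvr := schema.GroupVersionResource{
            Group:    "mygroup.example.com",
            Version:  "v1beta1",
            Resource: "noxus",
        }
        w, err := dyn.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for ev := range w.ResultChan() {
            fmt.Println(ev.Type) // ADDED / MODIFIED / DELETED, as in the log above
        }
    }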
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:24:29.991: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c30467b-812c-40f2-ae48-5962c5034730" in namespace "downward-api-3436" to be "Succeeded or Failed"
Mar 31 19:24:30.023: INFO: Pod "downwardapi-volume-1c30467b-812c-40f2-ae48-5962c5034730": Phase="Pending", Reason="", readiness=false. Elapsed: 31.858646ms
Mar 31 19:24:32.053: INFO: Pod "downwardapi-volume-1c30467b-812c-40f2-ae48-5962c5034730": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062228344s
STEP: Saw pod success
Mar 31 19:24:32.053: INFO: Pod "downwardapi-volume-1c30467b-812c-40f2-ae48-5962c5034730" satisfied condition "Succeeded or Failed"
Mar 31 19:24:32.083: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-1c30467b-812c-40f2-ae48-5962c5034730 container client-container: <nil>
STEP: delete the pod
Mar 31 19:24:32.163: INFO: Waiting for pod downwardapi-volume-1c30467b-812c-40f2-ae48-5962c5034730 to disappear
Mar 31 19:24:32.193: INFO: Pod downwardapi-volume-1c30467b-812c-40f2-ae48-5962c5034730 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 31 19:24:32.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3436" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":283,"completed":144,"skipped":2229,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
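"podname only" means the pod projects exactly one downward-API file, backed by the metadata.name field; the container prints that file so the test can assert on its logs. A minimal pod-spec sketch in Go, with the image and names as assumptions rather than the suite's generated values:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // illustrative name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:         "client-container",
                    Image:        "docker.io/library/busybox:1.29", // assumed image
                    Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            // "podname only": a single file backed by metadata.name.
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        },
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }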
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 31 19:24:33.556: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 31 19:24:33.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-533" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":145,"skipped":2305,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
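The assertion "Expected: &{} to match Container's Termination Message:  --" in the log is the interesting part: with TerminationMessagePolicy FallbackToLogsOnError, the kubelet falls back to container logs only when the container exits non-zero and the termination-message file is empty, so a succeeding pod reports an empty message. A sketch of the relevant container fields; image and names are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"}, // illustrative name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "main",
                    Image:   "docker.io/library/busybox:1.29", // assumed image
                    Command: []string{"true"},                 // succeeds, writes nothing
                    // Logs are used as the message only on failure with an
                    // empty message file; success therefore yields "".
                    TerminationMessagePath:   "/dev/termination-log",
                    TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }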
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
... skipping 19 lines ...
Mar 31 19:24:37.152: INFO: Pod "adopt-release-5pg4v": Phase="Running", Reason="", readiness=true. Elapsed: 31.783101ms
Mar 31 19:24:37.152: INFO: Pod "adopt-release-5pg4v" satisfied condition "released"
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Mar 31 19:24:37.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-285" for this suite.
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":283,"completed":146,"skipped":2331,"failed":0}
SSSSSSSSSSSSSSS
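Adoption and release both hinge on label selection: the Job controller adopts orphaned pods whose labels match its selector, setting itself as their controllerRef, and releases pods whose labels stop matching. A sketch of forcing the release half by rewriting a pod label, assuming a reachable cluster; the pod name, namespace, and label key are hypothetical:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Overwrite the label the Job's selector matches on; the Job controller
        // reacts by removing its controllerRef from the pod ("release"). The
        // reverse (adoption) happens when an orphaned pod's labels match again.
        patch := []byte(`{"metadata":{"labels":{"job":"released"}}}`) // hypothetical label key
        _, err = cs.CoreV1().Pods("default").Patch(context.TODO(), "adopt-release-demo",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
    }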
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 43 lines ...
Mar 31 19:24:56.390: INFO: Pod "test-rollover-deployment-78df7bc796-s4mrm" is available:
&Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-s4mrm test-rollover-deployment-78df7bc796- deployment-398 /api/v1/namespaces/deployment-398/pods/test-rollover-deployment-78df7bc796-s4mrm 9c7d488a-be1b-48ad-a3f6-6eb5b1e80d10 18413 0 2020-03-31 19:24:44 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78df7bc796] map[cni.projectcalico.org/podIP:192.168.234.200/32] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 7a73840b-308d-45e3-a794-f52dace47801 0xc000542607 0xc000542608}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pcgbz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pcgbz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pcgbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:24:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-03-31 19:24:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:24:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:24:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.5,PodIP:192.168.234.200,StartTime:2020-03-31 19:24:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-31 19:24:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://d904daadb4111a4f0b029ddb836121c4e5fffe992254cc12cdb3daa760195c40,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.234.200,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 31 19:24:56.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-398" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":283,"completed":147,"skipped":2346,"failed":0}
SSSSSSSSSS
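A rollover is a rollout interrupted by another rollout: updating the pod template while an earlier update is still progressing makes the deployment controller create a new ReplicaSet and scale the intermediate one down before it ever becomes fully available. Triggering one is just a second template change, sketched here as a strategic-merge patch; the namespace, names, and image are illustrative:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Strategic merge on the containers list keys by container name, so
        // only the named container's image is replaced; the change creates a
        // new ReplicaSet and starts the rollover.
        patch := []byte(`{"spec":{"template":{"spec":{"containers":[` +
            `{"name":"agnhost","image":"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12"}]}}}}`)
        _, err = cs.AppsV1().Deployments("default").Patch(context.TODO(), "test-rollover-deployment",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
    }

The deployment's maxUnavailable/maxSurge settings still bound the churn; a rollover changes which ReplicaSet wins, not how fast pods turn over.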
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] HostPath
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test hostPath mode
Mar 31 19:24:56.659: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7824" to be "Succeeded or Failed"
Mar 31 19:24:56.688: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 29.336743ms
Mar 31 19:24:58.719: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060501325s
STEP: Saw pod success
Mar 31 19:24:58.719: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Mar 31 19:24:58.749: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Mar 31 19:24:58.825: INFO: Waiting for pod pod-host-path-test to disappear
Mar 31 19:24:58.856: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:175
Mar 31 19:24:58.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7824" for this suite.
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":148,"skipped":2356,"failed":0}
SS
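"Correct mode" here is the permission bits the kubelet gives a hostPath mount inside the container. A minimal sketch of such a pod; the host path, image, and stat-based check are assumptions standing in for the suite's mounttest container:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        hostPathType := corev1.HostPathDirectoryOrCreate
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "host-path-demo"}, // illustrative name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container-1",
                    Image:   "docker.io/library/busybox:1.29", // assumed image
                    Command: []string{"sh", "-c", "stat -c %a /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{
                            Path: "/tmp/host-path-demo", // illustrative host directory
                            Type: &hostPathType,
                        },
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }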
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 190 lines ...
Mar 31 19:25:18.648: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 31 19:25:18.648: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 19:25:18.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5620" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":283,"completed":149,"skipped":2358,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Mar 31 19:25:18.883: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 31 19:25:22.741: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 19:25:36.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7847" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":283,"completed":150,"skipped":2400,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:175
Mar 31 19:25:36.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9612" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":283,"completed":151,"skipped":2416,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 19:25:37.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-537" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":283,"completed":152,"skipped":2431,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-b1905287-ebcd-4669-9c9b-e009f1d54037
STEP: Creating a pod to test consume configMaps
Mar 31 19:25:37.520: INFO: Waiting up to 5m0s for pod "pod-configmaps-3e50ca0a-4bc7-4b5e-86d1-08e0ae49a8ec" in namespace "configmap-3012" to be "Succeeded or Failed"
Mar 31 19:25:37.551: INFO: Pod "pod-configmaps-3e50ca0a-4bc7-4b5e-86d1-08e0ae49a8ec": Phase="Pending", Reason="", readiness=false. Elapsed: 30.846505ms
Mar 31 19:25:39.582: INFO: Pod "pod-configmaps-3e50ca0a-4bc7-4b5e-86d1-08e0ae49a8ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061706022s
STEP: Saw pod success
Mar 31 19:25:39.582: INFO: Pod "pod-configmaps-3e50ca0a-4bc7-4b5e-86d1-08e0ae49a8ec" satisfied condition "Succeeded or Failed"
Mar 31 19:25:39.612: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-configmaps-3e50ca0a-4bc7-4b5e-86d1-08e0ae49a8ec container configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 19:25:39.697: INFO: Waiting for pod pod-configmaps-3e50ca0a-4bc7-4b5e-86d1-08e0ae49a8ec to disappear
Mar 31 19:25:39.728: INFO: Pod pod-configmaps-3e50ca0a-4bc7-4b5e-86d1-08e0ae49a8ec no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 31 19:25:39.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3012" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":283,"completed":153,"skipped":2432,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
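The non-root variant differs from the plain ConfigMap-volume specs only by a pod-level security context, plus a readable DefaultMode on the volume so the non-root UID can read the projected keys. A sketch; the UID, names, and image are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        uid := int64(1000)  // any non-root UID; illustrative
        mode := int32(0444) // world-readable so the non-root user can read the keys
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-nonroot-demo"}, // illustrative name
            Spec: corev1.PodSpec{
                RestartPolicy:   corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "docker.io/library/busybox:1.29", // assumed image
                    Command: []string{"sh", "-c", "id -u && cat /etc/configmap-volume/*"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"}, // illustrative
                            DefaultMode:          &mode,
                        },
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }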
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 67 lines ...
Mar 31 19:25:55.414: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9620/pods","resourceVersion":"19060"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 31 19:25:55.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9620" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":283,"completed":154,"skipped":2518,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 9 lines ...
STEP: creating pod
Mar 31 19:25:57.908: INFO: Pod pod-hostip-fb12a6a9-2656-4034-8dbe-996f6caadd4d has hostIP: 10.150.0.5
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 31 19:25:57.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8709" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":283,"completed":155,"skipped":2538,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 26 lines ...
Mar 31 19:26:16.876: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 31 19:26:17.142: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 31 19:26:17.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8883" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":283,"completed":156,"skipped":2549,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 31 19:26:17.236: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 31 19:26:17.411: INFO: Waiting up to 5m0s for pod "downward-api-00243d15-7df1-4e0f-93e5-2ebd5d6f77c5" in namespace "downward-api-4820" to be "Succeeded or Failed"
Mar 31 19:26:17.442: INFO: Pod "downward-api-00243d15-7df1-4e0f-93e5-2ebd5d6f77c5": Phase="Pending", Reason="", readiness=false. Elapsed: 31.340786ms
Mar 31 19:26:19.473: INFO: Pod "downward-api-00243d15-7df1-4e0f-93e5-2ebd5d6f77c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061615525s
STEP: Saw pod success
Mar 31 19:26:19.473: INFO: Pod "downward-api-00243d15-7df1-4e0f-93e5-2ebd5d6f77c5" satisfied condition "Succeeded or Failed"
Mar 31 19:26:19.503: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downward-api-00243d15-7df1-4e0f-93e5-2ebd5d6f77c5 container dapi-container: <nil>
STEP: delete the pod
Mar 31 19:26:19.582: INFO: Waiting for pod downward-api-00243d15-7df1-4e0f-93e5-2ebd5d6f77c5 to disappear
Mar 31 19:26:19.614: INFO: Pod downward-api-00243d15-7df1-4e0f-93e5-2ebd5d6f77c5 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 31 19:26:19.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4820" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":283,"completed":157,"skipped":2566,"failed":0}

------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 19:26:24.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-148" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":283,"completed":158,"skipped":2566,"failed":0}
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Mar 31 19:26:25.644: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 31 19:26:28.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8188" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":283,"completed":159,"skipped":2569,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:26:29.016: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ed786c5-9d3a-42dc-a037-1eb22ec39629" in namespace "projected-1521" to be "Succeeded or Failed"
Mar 31 19:26:29.046: INFO: Pod "downwardapi-volume-7ed786c5-9d3a-42dc-a037-1eb22ec39629": Phase="Pending", Reason="", readiness=false. Elapsed: 29.790751ms
Mar 31 19:26:31.076: INFO: Pod "downwardapi-volume-7ed786c5-9d3a-42dc-a037-1eb22ec39629": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060208374s
STEP: Saw pod success
Mar 31 19:26:31.076: INFO: Pod "downwardapi-volume-7ed786c5-9d3a-42dc-a037-1eb22ec39629" satisfied condition "Succeeded or Failed"
Mar 31 19:26:31.106: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-7ed786c5-9d3a-42dc-a037-1eb22ec39629 container client-container: <nil>
STEP: delete the pod
Mar 31 19:26:31.192: INFO: Waiting for pod downwardapi-volume-7ed786c5-9d3a-42dc-a037-1eb22ec39629 to disappear
Mar 31 19:26:31.223: INFO: Pod downwardapi-volume-7ed786c5-9d3a-42dc-a037-1eb22ec39629 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 31 19:26:31.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1521" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":160,"skipped":2607,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Mar 31 19:26:35.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1536" for this suite.
STEP: Destroying namespace "webhook-1536-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":283,"completed":161,"skipped":2623,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 31 19:26:38.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6537" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":283,"completed":162,"skipped":2679,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 32 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 31 19:26:43.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6153" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":283,"completed":163,"skipped":2693,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 31 19:26:43.174: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 31 19:26:43.348: INFO: Waiting up to 5m0s for pod "pod-e78e19c1-2e35-4e0f-a4e8-707e71fbddcd" in namespace "emptydir-1814" to be "Succeeded or Failed"
Mar 31 19:26:43.378: INFO: Pod "pod-e78e19c1-2e35-4e0f-a4e8-707e71fbddcd": Phase="Pending", Reason="", readiness=false. Elapsed: 29.75175ms
Mar 31 19:26:45.408: INFO: Pod "pod-e78e19c1-2e35-4e0f-a4e8-707e71fbddcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059833473s
STEP: Saw pod success
Mar 31 19:26:45.408: INFO: Pod "pod-e78e19c1-2e35-4e0f-a4e8-707e71fbddcd" satisfied condition "Succeeded or Failed"
Mar 31 19:26:45.438: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-e78e19c1-2e35-4e0f-a4e8-707e71fbddcd container test-container: <nil>
STEP: delete the pod
Mar 31 19:26:45.516: INFO: Waiting for pod pod-e78e19c1-2e35-4e0f-a4e8-707e71fbddcd to disappear
Mar 31 19:26:45.547: INFO: Pod pod-e78e19c1-2e35-4e0f-a4e8-707e71fbddcd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 19:26:45.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1814" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":164,"skipped":2695,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 31 19:26:45.636: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 31 19:26:45.803: INFO: Waiting up to 5m0s for pod "pod-9ce7da42-8375-4c97-95b0-d9accda40973" in namespace "emptydir-350" to be "Succeeded or Failed"
Mar 31 19:26:45.832: INFO: Pod "pod-9ce7da42-8375-4c97-95b0-d9accda40973": Phase="Pending", Reason="", readiness=false. Elapsed: 29.116365ms
Mar 31 19:26:47.861: INFO: Pod "pod-9ce7da42-8375-4c97-95b0-d9accda40973": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058843333s
STEP: Saw pod success
Mar 31 19:26:47.861: INFO: Pod "pod-9ce7da42-8375-4c97-95b0-d9accda40973" satisfied condition "Succeeded or Failed"
Mar 31 19:26:47.891: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-9ce7da42-8375-4c97-95b0-d9accda40973 container test-container: <nil>
STEP: delete the pod
Mar 31 19:26:47.966: INFO: Waiting for pod pod-9ce7da42-8375-4c97-95b0-d9accda40973 to disappear
Mar 31 19:26:47.995: INFO: Pod pod-9ce7da42-8375-4c97-95b0-d9accda40973 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 19:26:47.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-350" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":165,"skipped":2708,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
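The "(root,0644,tmpfs)" triple in the spec name encodes: run as root, expect 0644 file semantics, and back the emptyDir with memory. The last part is a one-field choice on the volume source. A sketch, with the image and shell check as assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // illustrative name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "docker.io/library/busybox:1.29", // assumed image
                    // Write a 0644 file and show that the mount is tmpfs.
                    Command: []string{"sh", "-c",
                        "touch /test-volume/f && chmod 0644 /test-volume/f && mount | grep /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "Memory" backs the emptyDir with tmpfs instead
                        // of the node's disk; the default medium uses disk.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }

A memory-backed emptyDir counts against the pod's memory accounting, which is the usual design trade-off when choosing tmpfs here.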
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-c48a0f51-c7d6-41c2-9fae-4246d2237c6b
STEP: Creating a pod to test consume configMaps
Mar 31 19:26:48.287: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2e6cc707-4a57-4d88-a7ae-4d4ae631c03a" in namespace "projected-3559" to be "Succeeded or Failed"
Mar 31 19:26:48.319: INFO: Pod "pod-projected-configmaps-2e6cc707-4a57-4d88-a7ae-4d4ae631c03a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.13952ms
Mar 31 19:26:50.349: INFO: Pod "pod-projected-configmaps-2e6cc707-4a57-4d88-a7ae-4d4ae631c03a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062461478s
STEP: Saw pod success
Mar 31 19:26:50.349: INFO: Pod "pod-projected-configmaps-2e6cc707-4a57-4d88-a7ae-4d4ae631c03a" satisfied condition "Succeeded or Failed"
Mar 31 19:26:50.378: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-projected-configmaps-2e6cc707-4a57-4d88-a7ae-4d4ae631c03a container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 19:26:50.454: INFO: Waiting for pod pod-projected-configmaps-2e6cc707-4a57-4d88-a7ae-4d4ae631c03a to disappear
Mar 31 19:26:50.484: INFO: Pod pod-projected-configmaps-2e6cc707-4a57-4d88-a7ae-4d4ae631c03a no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 31 19:26:50.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3559" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":283,"completed":166,"skipped":2748,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] version v1
... skipping 105 lines ...
<a href="btmp">btmp</a>
<a href="ch... (200; 33.335726ms)
[AfterEach] version v1
  test/e2e/framework/framework.go:175
Mar 31 19:26:51.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6064" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":283,"completed":167,"skipped":2757,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 69 lines ...
Mar 31 19:29:15.038: INFO: Waiting for statefulset status.replicas updated to 0
Mar 31 19:29:15.068: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 31 19:29:15.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4082" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":283,"completed":168,"skipped":2780,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 46 lines ...
Mar 31 19:29:35.189: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2345/pods","resourceVersion":"20517"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 31 19:29:35.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2345" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":283,"completed":169,"skipped":2797,"failed":0}
SSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-25ddaedb-28a0-4b9e-8410-a4384fc4e6fc
STEP: Creating secret with name secret-projected-all-test-volume-6e431b35-0a2b-49e4-8ac8-bfafdb7e8099
STEP: Creating a pod to test Check all projections for projected volume plugin
Mar 31 19:29:35.650: INFO: Waiting up to 5m0s for pod "projected-volume-1433969c-19a8-419e-a333-1f7ef117ae6a" in namespace "projected-7962" to be "Succeeded or Failed"
Mar 31 19:29:35.681: INFO: Pod "projected-volume-1433969c-19a8-419e-a333-1f7ef117ae6a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.789426ms
Mar 31 19:29:37.714: INFO: Pod "projected-volume-1433969c-19a8-419e-a333-1f7ef117ae6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063603565s
STEP: Saw pod success
Mar 31 19:29:37.714: INFO: Pod "projected-volume-1433969c-19a8-419e-a333-1f7ef117ae6a" satisfied condition "Succeeded or Failed"
Mar 31 19:29:37.744: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod projected-volume-1433969c-19a8-419e-a333-1f7ef117ae6a container projected-all-volume-test: <nil>
STEP: delete the pod
Mar 31 19:29:37.843: INFO: Waiting for pod projected-volume-1433969c-19a8-419e-a333-1f7ef117ae6a to disappear
Mar 31 19:29:37.873: INFO: Pod projected-volume-1433969c-19a8-419e-a333-1f7ef117ae6a no longer exists
[AfterEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:175
Mar 31 19:29:37.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7962" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":283,"completed":170,"skipped":2801,"failed":0}
SSSSSSSSSS
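A projected volume is what lets one mount combine the three sources this spec checks: ConfigMap, Secret, and downward API files appear under a single directory. A sketch of just the volume definition; all object names are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "all-in-one", // illustrative name
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"}, // illustrative
                        }},
                        {Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"}, // illustrative
                        }},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        }},
                    },
                },
            },
        }
        // All three sources land under the same mount point of whichever
        // container mounts "all-in-one".
        b, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(b))
    }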
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 9 lines ...
STEP: Creating the pod
Mar 31 19:29:40.862: INFO: Successfully updated pod "annotationupdate2ff13600-8a08-4c0c-a9b2-cf9ed0c4bae2"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 31 19:29:44.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6146" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":283,"completed":171,"skipped":2811,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
STEP: Updating configmap projected-configmap-test-upd-323ae742-6592-49f1-bedd-3d93689c5f41
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 31 19:29:51.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6202" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":172,"skipped":2818,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 31 19:29:51.665: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 31 19:29:51.835: INFO: Waiting up to 5m0s for pod "downward-api-f381ac1c-1367-4ce0-b004-7e628d3fc9d5" in namespace "downward-api-2460" to be "Succeeded or Failed"
Mar 31 19:29:51.866: INFO: Pod "downward-api-f381ac1c-1367-4ce0-b004-7e628d3fc9d5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.884315ms
Mar 31 19:29:53.897: INFO: Pod "downward-api-f381ac1c-1367-4ce0-b004-7e628d3fc9d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062042953s
STEP: Saw pod success
Mar 31 19:29:53.897: INFO: Pod "downward-api-f381ac1c-1367-4ce0-b004-7e628d3fc9d5" satisfied condition "Succeeded or Failed"
Mar 31 19:29:53.927: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downward-api-f381ac1c-1367-4ce0-b004-7e628d3fc9d5 container dapi-container: <nil>
STEP: delete the pod
Mar 31 19:29:54.001: INFO: Waiting for pod downward-api-f381ac1c-1367-4ce0-b004-7e628d3fc9d5 to disappear
Mar 31 19:29:54.032: INFO: Pod downward-api-f381ac1c-1367-4ce0-b004-7e628d3fc9d5 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 31 19:29:54.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2460" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":283,"completed":173,"skipped":2837,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 31 19:29:54.123: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
Mar 31 19:31:54.376: INFO: Deleting pod "var-expansion-484b48b0-78e4-42cf-8f88-52999e5f4685" in namespace "var-expansion-3996"
Mar 31 19:31:54.411: INFO: Wait up to 5m0s for pod "var-expansion-484b48b0-78e4-42cf-8f88-52999e5f4685" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 31 19:31:56.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3996" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":283,"completed":174,"skipped":2839,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-z255
STEP: Creating a pod to test atomic-volume-subpath
Mar 31 19:31:56.798: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-z255" in namespace "subpath-8593" to be "Succeeded or Failed"
Mar 31 19:31:56.833: INFO: Pod "pod-subpath-test-configmap-z255": Phase="Pending", Reason="", readiness=false. Elapsed: 35.598993ms
Mar 31 19:31:58.863: INFO: Pod "pod-subpath-test-configmap-z255": Phase="Running", Reason="", readiness=true. Elapsed: 2.065515483s
Mar 31 19:32:00.901: INFO: Pod "pod-subpath-test-configmap-z255": Phase="Running", Reason="", readiness=true. Elapsed: 4.103054013s
Mar 31 19:32:02.931: INFO: Pod "pod-subpath-test-configmap-z255": Phase="Running", Reason="", readiness=true. Elapsed: 6.133177139s
Mar 31 19:32:04.961: INFO: Pod "pod-subpath-test-configmap-z255": Phase="Running", Reason="", readiness=true. Elapsed: 8.163471851s
Mar 31 19:32:06.992: INFO: Pod "pod-subpath-test-configmap-z255": Phase="Running", Reason="", readiness=true. Elapsed: 10.194310072s
Mar 31 19:32:09.022: INFO: Pod "pod-subpath-test-configmap-z255": Phase="Running", Reason="", readiness=true. Elapsed: 12.224794812s
Mar 31 19:32:11.053: INFO: Pod "pod-subpath-test-configmap-z255": Phase="Running", Reason="", readiness=true. Elapsed: 14.254955494s
Mar 31 19:32:13.085: INFO: Pod "pod-subpath-test-configmap-z255": Phase="Running", Reason="", readiness=true. Elapsed: 16.287659388s
Mar 31 19:32:15.116: INFO: Pod "pod-subpath-test-configmap-z255": Phase="Running", Reason="", readiness=true. Elapsed: 18.318823364s
Mar 31 19:32:17.147: INFO: Pod "pod-subpath-test-configmap-z255": Phase="Running", Reason="", readiness=true. Elapsed: 20.349877681s
Mar 31 19:32:19.178: INFO: Pod "pod-subpath-test-configmap-z255": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.379936322s
STEP: Saw pod success
Mar 31 19:32:19.178: INFO: Pod "pod-subpath-test-configmap-z255" satisfied condition "Succeeded or Failed"
Mar 31 19:32:19.207: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-subpath-test-configmap-z255 container test-container-subpath-configmap-z255: <nil>
STEP: delete the pod
Mar 31 19:32:19.294: INFO: Waiting for pod pod-subpath-test-configmap-z255 to disappear
Mar 31 19:32:19.325: INFO: Pod pod-subpath-test-configmap-z255 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-z255
Mar 31 19:32:19.325: INFO: Deleting pod "pod-subpath-test-configmap-z255" in namespace "subpath-8593"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 31 19:32:19.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8593" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":283,"completed":175,"skipped":2877,"failed":0}
S
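"mountPath of existing file" is the subtle case: with subPath, a single key of the ConfigMap volume is mounted over one pre-existing file instead of shadowing the whole directory. A sketch; the paths, key, and image are assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"}, // illustrative name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "docker.io/library/busybox:1.29", // assumed image
                    Command: []string{"sh", "-c", "cat /etc/hostname"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "config",
                        // With SubPath, only the named key is mounted, over this
                        // single (already existing) file; the rest of /etc is
                        // left untouched.
                        MountPath: "/etc/hostname",
                        SubPath:   "hostname-key", // hypothetical ConfigMap key
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "config",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"}, // illustrative
                        },
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }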
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 31 19:32:19.579: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 19:32:24.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3854" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":283,"completed":176,"skipped":2878,"failed":0}

------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Mar 31 19:32:27.376: INFO: Pod "test-recreate-deployment-5f94c574ff-hc2wp" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-hc2wp test-recreate-deployment-5f94c574ff- deployment-4056 /api/v1/namespaces/deployment-4056/pods/test-recreate-deployment-5f94c574ff-hc2wp b8f226ab-8e5f-4f99-87e7-657ec85eae7c 21172 0 2020-03-31 19:32:27 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff fb2b2304-3e0e-43f8-a013-4a7655b0326c 0xc002232187 0xc002232188}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v6mww,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v6mww,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v6mww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:32:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:32:27 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:32:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-31 19:32:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.150.0.3,PodIP:,StartTime:2020-03-31 19:32:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:175
Mar 31 19:32:27.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4056" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":283,"completed":177,"skipped":2878,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-08cc399f-076f-43ed-8f19-a6e06cc19408
STEP: Creating a pod to test consume secrets
Mar 31 19:32:27.665: INFO: Waiting up to 5m0s for pod "pod-secrets-6963354c-3441-4cf8-8af3-4ffae6bd0655" in namespace "secrets-627" to be "Succeeded or Failed"
Mar 31 19:32:27.697: INFO: Pod "pod-secrets-6963354c-3441-4cf8-8af3-4ffae6bd0655": Phase="Pending", Reason="", readiness=false. Elapsed: 31.951355ms
Mar 31 19:32:29.727: INFO: Pod "pod-secrets-6963354c-3441-4cf8-8af3-4ffae6bd0655": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061990712s
STEP: Saw pod success
Mar 31 19:32:29.727: INFO: Pod "pod-secrets-6963354c-3441-4cf8-8af3-4ffae6bd0655" satisfied condition "Succeeded or Failed"
Mar 31 19:32:29.757: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-secrets-6963354c-3441-4cf8-8af3-4ffae6bd0655 container secret-volume-test: <nil>
STEP: delete the pod
Mar 31 19:32:29.851: INFO: Waiting for pod pod-secrets-6963354c-3441-4cf8-8af3-4ffae6bd0655 to disappear
Mar 31 19:32:29.880: INFO: Pod pod-secrets-6963354c-3441-4cf8-8af3-4ffae6bd0655 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 31 19:32:29.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-627" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":283,"completed":178,"skipped":2945,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 31 19:32:43.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5487" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":283,"completed":179,"skipped":2951,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
Mar 31 19:32:46.053: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 31 19:32:46.322: INFO: Deleting pod dns-2561...
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 31 19:32:46.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2561" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":283,"completed":180,"skipped":2956,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
... skipping 11 lines ...
Mar 31 19:32:48.815: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:175
Mar 31 19:32:48.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6190" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":283,"completed":181,"skipped":2966,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-744f2757-7857-4379-9b99-7dd79fc2c26d
STEP: Creating a pod to test consume secrets
Mar 31 19:32:49.218: INFO: Waiting up to 5m0s for pod "pod-secrets-80623701-71d1-4283-8722-8bbeed6ebb38" in namespace "secrets-2717" to be "Succeeded or Failed"
Mar 31 19:32:49.250: INFO: Pod "pod-secrets-80623701-71d1-4283-8722-8bbeed6ebb38": Phase="Pending", Reason="", readiness=false. Elapsed: 31.621448ms
Mar 31 19:32:51.286: INFO: Pod "pod-secrets-80623701-71d1-4283-8722-8bbeed6ebb38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.067911577s
STEP: Saw pod success
Mar 31 19:32:51.286: INFO: Pod "pod-secrets-80623701-71d1-4283-8722-8bbeed6ebb38" satisfied condition "Succeeded or Failed"
Mar 31 19:32:51.316: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-secrets-80623701-71d1-4283-8722-8bbeed6ebb38 container secret-env-test: <nil>
STEP: delete the pod
Mar 31 19:32:51.395: INFO: Waiting for pod pod-secrets-80623701-71d1-4283-8722-8bbeed6ebb38 to disappear
Mar 31 19:32:51.425: INFO: Pod pod-secrets-80623701-71d1-4283-8722-8bbeed6ebb38 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 31 19:32:51.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2717" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":283,"completed":182,"skipped":2978,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 11 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 31 19:32:54.381: INFO: Successfully updated pod "pod-update-activedeadlineseconds-080765b1-a842-4e2f-9dce-0a0ff0e31d33"
Mar 31 19:32:54.381: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-080765b1-a842-4e2f-9dce-0a0ff0e31d33" in namespace "pods-274" to be "terminated due to deadline exceeded"
Mar 31 19:32:54.411: INFO: Pod "pod-update-activedeadlineseconds-080765b1-a842-4e2f-9dce-0a0ff0e31d33": Phase="Running", Reason="", readiness=true. Elapsed: 29.788246ms
Mar 31 19:32:56.441: INFO: Pod "pod-update-activedeadlineseconds-080765b1-a842-4e2f-9dce-0a0ff0e31d33": Phase="Running", Reason="", readiness=true. Elapsed: 2.059810855s
Mar 31 19:32:58.471: INFO: Pod "pod-update-activedeadlineseconds-080765b1-a842-4e2f-9dce-0a0ff0e31d33": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.090226073s
Mar 31 19:32:58.471: INFO: Pod "pod-update-activedeadlineseconds-080765b1-a842-4e2f-9dce-0a0ff0e31d33" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 31 19:32:58.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-274" for this suite.
•{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":283,"completed":183,"skipped":2990,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1743
STEP: Creating statefulset with conflicting port in namespace statefulset-1743
STEP: Waiting until pod test-pod starts running in namespace statefulset-1743
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1743
Mar 31 19:33:02.920: INFO: Observed stateful pod in namespace: statefulset-1743, name: ss-0, uid: ddde96db-e540-473a-992d-330c3f955978, status phase: Pending. Waiting for statefulset controller to delete.
Mar 31 19:33:03.203: INFO: Observed stateful pod in namespace: statefulset-1743, name: ss-0, uid: ddde96db-e540-473a-992d-330c3f955978, status phase: Failed. Waiting for statefulset controller to delete.
Mar 31 19:33:03.218: INFO: Observed stateful pod in namespace: statefulset-1743, name: ss-0, uid: ddde96db-e540-473a-992d-330c3f955978, status phase: Failed. Waiting for statefulset controller to delete.
Mar 31 19:33:03.223: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1743
STEP: Removing pod with conflicting port in namespace statefulset-1743
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1743 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:110
Mar 31 19:33:07.363: INFO: Deleting all statefulset in ns statefulset-1743
Mar 31 19:33:07.395: INFO: Scaling statefulset ss to 0
Mar 31 19:33:17.525: INFO: Waiting for statefulset status.replicas updated to 0
Mar 31 19:33:17.554: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 31 19:33:17.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1743" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":283,"completed":184,"skipped":3028,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Mar 31 19:33:58.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0331 19:33:58.111616   25441 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
STEP: Destroying namespace "gc-1841" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":283,"completed":185,"skipped":3031,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 8 lines ...
Mar 31 19:33:58.316: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 31 19:34:01.626: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 19:34:15.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-457" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":283,"completed":186,"skipped":3037,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  test/e2e/framework/framework.go:175
Mar 31 19:34:16.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-7057" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":283,"completed":187,"skipped":3107,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 11 lines ...
Mar 31 19:34:16.455: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 31 19:34:16.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9935" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":283,"completed":188,"skipped":3132,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 16 lines ...
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 31 19:34:25.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-180" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":283,"completed":189,"skipped":3171,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-57b59c05-49a2-4a54-82b8-ad73866b11e9
STEP: Creating a pod to test consume secrets
Mar 31 19:34:25.557: INFO: Waiting up to 5m0s for pod "pod-secrets-171751ed-b093-4d26-b749-148967e2d2f8" in namespace "secrets-5624" to be "Succeeded or Failed"
Mar 31 19:34:25.587: INFO: Pod "pod-secrets-171751ed-b093-4d26-b749-148967e2d2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.837352ms
Mar 31 19:34:27.617: INFO: Pod "pod-secrets-171751ed-b093-4d26-b749-148967e2d2f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059927727s
STEP: Saw pod success
Mar 31 19:34:27.617: INFO: Pod "pod-secrets-171751ed-b093-4d26-b749-148967e2d2f8" satisfied condition "Succeeded or Failed"
Mar 31 19:34:27.647: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-secrets-171751ed-b093-4d26-b749-148967e2d2f8 container secret-volume-test: <nil>
STEP: delete the pod
Mar 31 19:34:27.734: INFO: Waiting for pod pod-secrets-171751ed-b093-4d26-b749-148967e2d2f8 to disappear
Mar 31 19:34:27.765: INFO: Pod pod-secrets-171751ed-b093-4d26-b749-148967e2d2f8 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 31 19:34:27.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5624" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":190,"skipped":3181,"failed":0}

------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 28 lines ...
Mar 31 19:34:51.875: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 31 19:34:53.117: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 31 19:34:53.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5237" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":191,"skipped":3181,"failed":0}
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Mar 31 19:34:57.042: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl exec --namespace=svcaccounts-4699 pod-service-account-112d8e82-0dae-482b-a479-82b12f30fddb -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:175
Mar 31 19:34:57.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4699" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":283,"completed":192,"skipped":3189,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
... skipping 2 lines ...
Mar 31 19:34:57.631: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Mar 31 19:34:57.807: INFO: Waiting up to 5m0s for pod "var-expansion-ecd7b215-b611-4dcd-a918-7a4c9cbbd481" in namespace "var-expansion-6224" to be "Succeeded or Failed"
Mar 31 19:34:57.839: INFO: Pod "var-expansion-ecd7b215-b611-4dcd-a918-7a4c9cbbd481": Phase="Pending", Reason="", readiness=false. Elapsed: 32.352983ms
Mar 31 19:34:59.870: INFO: Pod "var-expansion-ecd7b215-b611-4dcd-a918-7a4c9cbbd481": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063280248s
STEP: Saw pod success
Mar 31 19:34:59.870: INFO: Pod "var-expansion-ecd7b215-b611-4dcd-a918-7a4c9cbbd481" satisfied condition "Succeeded or Failed"
Mar 31 19:34:59.900: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod var-expansion-ecd7b215-b611-4dcd-a918-7a4c9cbbd481 container dapi-container: <nil>
STEP: delete the pod
Mar 31 19:34:59.980: INFO: Waiting for pod var-expansion-ecd7b215-b611-4dcd-a918-7a4c9cbbd481 to disappear
Mar 31 19:35:00.012: INFO: Pod var-expansion-ecd7b215-b611-4dcd-a918-7a4c9cbbd481 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
Mar 31 19:35:00.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6224" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":283,"completed":193,"skipped":3217,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 31 19:35:05.548: INFO: stderr: ""
Mar 31 19:35:05.548: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5343-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 19:35:08.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4401" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":283,"completed":194,"skipped":3265,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 31 19:35:13.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6321" for this suite.
STEP: Destroying namespace "webhook-6321-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":283,"completed":195,"skipped":3266,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 15 lines ...
  test/e2e/framework/framework.go:175
Mar 31 19:35:20.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3755" for this suite.
STEP: Destroying namespace "nsdeletetest-9803" for this suite.
Mar 31 19:35:20.336: INFO: Namespace nsdeletetest-9803 was already deleted
STEP: Destroying namespace "nsdeletetest-9438" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":283,"completed":196,"skipped":3270,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 31 19:35:20.373: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 31 19:35:20.538: INFO: Waiting up to 5m0s for pod "downward-api-5be32374-b336-439e-aa83-4f42b2705706" in namespace "downward-api-3923" to be "Succeeded or Failed"
Mar 31 19:35:20.571: INFO: Pod "downward-api-5be32374-b336-439e-aa83-4f42b2705706": Phase="Pending", Reason="", readiness=false. Elapsed: 32.814183ms
Mar 31 19:35:22.601: INFO: Pod "downward-api-5be32374-b336-439e-aa83-4f42b2705706": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063432892s
STEP: Saw pod success
Mar 31 19:35:22.601: INFO: Pod "downward-api-5be32374-b336-439e-aa83-4f42b2705706" satisfied condition "Succeeded or Failed"
Mar 31 19:35:22.631: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod downward-api-5be32374-b336-439e-aa83-4f42b2705706 container dapi-container: <nil>
STEP: delete the pod
Mar 31 19:35:22.714: INFO: Waiting for pod downward-api-5be32374-b336-439e-aa83-4f42b2705706 to disappear
Mar 31 19:35:22.745: INFO: Pod downward-api-5be32374-b336-439e-aa83-4f42b2705706 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 31 19:35:22.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3923" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":283,"completed":197,"skipped":3283,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 31 19:35:22.965: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 19:35:23.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3829" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":283,"completed":198,"skipped":3298,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 11 lines ...
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 19:35:45.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8013" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":283,"completed":199,"skipped":3304,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-2f8c8c57-d8ed-4c6b-8209-44d31cadfaad
STEP: Creating a pod to test consume configMaps
Mar 31 19:35:45.326: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c2bc0358-f83d-49ca-a204-51995d19eaab" in namespace "projected-8449" to be "Succeeded or Failed"
Mar 31 19:35:45.363: INFO: Pod "pod-projected-configmaps-c2bc0358-f83d-49ca-a204-51995d19eaab": Phase="Pending", Reason="", readiness=false. Elapsed: 36.125903ms
Mar 31 19:35:47.393: INFO: Pod "pod-projected-configmaps-c2bc0358-f83d-49ca-a204-51995d19eaab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.066908347s
STEP: Saw pod success
Mar 31 19:35:47.393: INFO: Pod "pod-projected-configmaps-c2bc0358-f83d-49ca-a204-51995d19eaab" satisfied condition "Succeeded or Failed"
Mar 31 19:35:47.424: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-projected-configmaps-c2bc0358-f83d-49ca-a204-51995d19eaab container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 19:35:47.503: INFO: Waiting for pod pod-projected-configmaps-c2bc0358-f83d-49ca-a204-51995d19eaab to disappear
Mar 31 19:35:47.533: INFO: Pod pod-projected-configmaps-c2bc0358-f83d-49ca-a204-51995d19eaab no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 31 19:35:47.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8449" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":283,"completed":200,"skipped":3322,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:175
Mar 31 19:35:47.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-283" for this suite.
STEP: Destroying namespace "nspatchtest-6cf7318d-7b9e-4bd0-8cee-cfe0ea96b149-7919" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":283,"completed":201,"skipped":3362,"failed":0}

------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
... skipping 2 lines ...
Mar 31 19:35:48.047: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Mar 31 19:35:48.219: INFO: Waiting up to 5m0s for pod "client-containers-b12fde82-380e-4e40-868f-5a2a9efde87c" in namespace "containers-7466" to be "Succeeded or Failed"
Mar 31 19:35:48.251: INFO: Pod "client-containers-b12fde82-380e-4e40-868f-5a2a9efde87c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.335899ms
Mar 31 19:35:50.281: INFO: Pod "client-containers-b12fde82-380e-4e40-868f-5a2a9efde87c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062035449s
STEP: Saw pod success
Mar 31 19:35:50.281: INFO: Pod "client-containers-b12fde82-380e-4e40-868f-5a2a9efde87c" satisfied condition "Succeeded or Failed"
Mar 31 19:35:50.310: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod client-containers-b12fde82-380e-4e40-868f-5a2a9efde87c container test-container: <nil>
STEP: delete the pod
Mar 31 19:35:50.390: INFO: Waiting for pod client-containers-b12fde82-380e-4e40-868f-5a2a9efde87c to disappear
Mar 31 19:35:50.421: INFO: Pod client-containers-b12fde82-380e-4e40-868f-5a2a9efde87c no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
Mar 31 19:35:50.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7466" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":283,"completed":202,"skipped":3362,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  test/e2e/framework/framework.go:175
Mar 31 19:35:54.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5809" for this suite.
STEP: Destroying namespace "webhook-5809-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":283,"completed":203,"skipped":3364,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 28 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 31 19:35:56.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4122" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":283,"completed":204,"skipped":3371,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 17 lines ...
Mar 31 19:36:05.159: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:175
Mar 31 19:36:05.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7917" for this suite.
•{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":283,"completed":205,"skipped":3374,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Mar 31 19:36:12.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5361" for this suite.
STEP: Destroying namespace "webhook-5361-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":283,"completed":206,"skipped":3382,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 60 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:175
Mar 31 19:36:16.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1272" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":283,"completed":207,"skipped":3423,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 20 lines ...
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 31 19:36:40.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6491" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":283,"completed":208,"skipped":3433,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
... skipping 12 lines ...
Mar 31 19:36:42.776: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:175
Mar 31 19:36:42.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7351" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":283,"completed":209,"skipped":3462,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Mar 31 19:36:50.386: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-5457-crds.spec'
Mar 31 19:36:50.795: INFO: stderr: ""
Mar 31 19:36:50.795: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5457-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Mar 31 19:36:50.795: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-5457-crds.spec.bars'
Mar 31 19:36:51.126: INFO: stderr: ""
Mar 31 19:36:51.126: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5457-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain returns an error when called on a property that doesn't exist
Mar 31 19:36:51.126: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig explain e2e-test-crd-publish-openapi-5457-crds.spec.bars2'
Mar 31 19:36:51.536: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 19:36:54.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2154" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":283,"completed":210,"skipped":3465,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 13 lines ...
Mar 31 19:37:15.005: INFO: Restart count of pod container-probe-8763/liveness-e176c473-1751-44c6-8ea2-43ef0fb1fc38 is now 1 (18.299962348s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 31 19:37:15.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8763" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":283,"completed":211,"skipped":3483,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 7 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 31 19:37:20.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8427" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":283,"completed":212,"skipped":3490,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Mar 31 19:37:20.563: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Mar 31 19:37:20.729: INFO: Waiting up to 5m0s for pod "downward-api-43b5875a-2c74-4f89-aecf-243c86fee2a6" in namespace "downward-api-4525" to be "Succeeded or Failed"
Mar 31 19:37:20.762: INFO: Pod "downward-api-43b5875a-2c74-4f89-aecf-243c86fee2a6": Phase="Pending", Reason="", readiness=false. Elapsed: 33.117ms
Mar 31 19:37:22.791: INFO: Pod "downward-api-43b5875a-2c74-4f89-aecf-243c86fee2a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062516353s
STEP: Saw pod success
Mar 31 19:37:22.791: INFO: Pod "downward-api-43b5875a-2c74-4f89-aecf-243c86fee2a6" satisfied condition "Succeeded or Failed"
Mar 31 19:37:22.821: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod downward-api-43b5875a-2c74-4f89-aecf-243c86fee2a6 container dapi-container: <nil>
STEP: delete the pod
Mar 31 19:37:22.899: INFO: Waiting for pod downward-api-43b5875a-2c74-4f89-aecf-243c86fee2a6 to disappear
Mar 31 19:37:22.930: INFO: Pod downward-api-43b5875a-2c74-4f89-aecf-243c86fee2a6 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
Mar 31 19:37:22.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4525" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":283,"completed":213,"skipped":3514,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
... skipping 22 lines ...
Mar 31 19:37:33.573: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3233 /api/v1/namespaces/watch-3233/configmaps/e2e-watch-test-label-changed 6f144fab-e147-4284-af05-b3a6b624ce77 23673 0 2020-03-31 19:37:23 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Mar 31 19:37:33.573: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3233 /api/v1/namespaces/watch-3233/configmaps/e2e-watch-test-label-changed 6f144fab-e147-4284-af05-b3a6b624ce77 23674 0 2020-03-31 19:37:23 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:175
Mar 31 19:37:33.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3233" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":283,"completed":214,"skipped":3522,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
... skipping 16 lines ...
Mar 31 19:37:40.022: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  test/e2e/framework/framework.go:175
Mar 31 19:37:40.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9334" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":283,"completed":215,"skipped":3534,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 33 lines ...

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
W0331 19:37:50.505488   25441 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 31 19:37:50.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7560" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":283,"completed":216,"skipped":3582,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  test/e2e/framework/framework.go:597
Mar 31 19:37:50.712: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 19:37:51.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4084" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":283,"completed":217,"skipped":3583,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 31 19:37:52.000: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 31 19:37:52.176: INFO: Waiting up to 5m0s for pod "pod-6d29678c-bf2c-4702-85e4-f7f5e70b3571" in namespace "emptydir-7544" to be "Succeeded or Failed"
Mar 31 19:37:52.207: INFO: Pod "pod-6d29678c-bf2c-4702-85e4-f7f5e70b3571": Phase="Pending", Reason="", readiness=false. Elapsed: 31.067195ms
Mar 31 19:37:54.237: INFO: Pod "pod-6d29678c-bf2c-4702-85e4-f7f5e70b3571": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061605834s
STEP: Saw pod success
Mar 31 19:37:54.237: INFO: Pod "pod-6d29678c-bf2c-4702-85e4-f7f5e70b3571" satisfied condition "Succeeded or Failed"
Mar 31 19:37:54.267: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-6d29678c-bf2c-4702-85e4-f7f5e70b3571 container test-container: <nil>
STEP: delete the pod
Mar 31 19:37:54.351: INFO: Waiting for pod pod-6d29678c-bf2c-4702-85e4-f7f5e70b3571 to disappear
Mar 31 19:37:54.381: INFO: Pod pod-6d29678c-bf2c-4702-85e4-f7f5e70b3571 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 19:37:54.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7544" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":218,"skipped":3614,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
... skipping 42 lines ...
Mar 31 19:38:01.081: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 31 19:38:01.336: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:175
Mar 31 19:38:01.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7131" for this suite.
•{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":219,"skipped":3699,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating secret secrets-7189/secret-test-abe58c95-d393-470e-a724-fe75ece4929c
STEP: Creating a pod to test consume secrets
Mar 31 19:38:01.624: INFO: Waiting up to 5m0s for pod "pod-configmaps-b6a3950d-d4e6-4482-9227-7d2aa553913e" in namespace "secrets-7189" to be "Succeeded or Failed"
Mar 31 19:38:01.655: INFO: Pod "pod-configmaps-b6a3950d-d4e6-4482-9227-7d2aa553913e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.027196ms
Mar 31 19:38:03.687: INFO: Pod "pod-configmaps-b6a3950d-d4e6-4482-9227-7d2aa553913e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062771382s
STEP: Saw pod success
Mar 31 19:38:03.687: INFO: Pod "pod-configmaps-b6a3950d-d4e6-4482-9227-7d2aa553913e" satisfied condition "Succeeded or Failed"
Mar 31 19:38:03.718: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-configmaps-b6a3950d-d4e6-4482-9227-7d2aa553913e container env-test: <nil>
STEP: delete the pod
Mar 31 19:38:03.795: INFO: Waiting for pod pod-configmaps-b6a3950d-d4e6-4482-9227-7d2aa553913e to disappear
Mar 31 19:38:03.826: INFO: Pod pod-configmaps-b6a3950d-d4e6-4482-9227-7d2aa553913e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Mar 31 19:38:03.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7189" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":283,"completed":220,"skipped":3704,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 12 lines ...
Mar 31 19:38:06.179: INFO: Initial restart count of pod busybox-4699a9f4-aea2-4aa1-af57-83374a06a5c7 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 31 19:42:07.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9908" for this suite.
•{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":283,"completed":221,"skipped":3748,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 32 lines ...
Mar 31 19:42:12.519: INFO: stdout: "service/rm3 exposed\n"
Mar 31 19:42:12.550: INFO: Service rm3 in namespace kubectl-6752 found.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 19:42:14.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6752" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":283,"completed":222,"skipped":3749,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
Mar 31 19:42:18.899: INFO: stderr: ""
Mar 31 19:42:18.899: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 19:42:18.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6320" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":283,"completed":223,"skipped":3752,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 24 lines ...
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 31 19:42:23.634: INFO: File wheezy_udp@dns-test-service-3.dns-609.svc.cluster.local from pod  dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 31 19:42:23.668: INFO: Lookups using dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac failed for: [wheezy_udp@dns-test-service-3.dns-609.svc.cluster.local]

Mar 31 19:42:28.701: INFO: File wheezy_udp@dns-test-service-3.dns-609.svc.cluster.local from pod  dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 31 19:42:28.732: INFO: Lookups using dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac failed for: [wheezy_udp@dns-test-service-3.dns-609.svc.cluster.local]

Mar 31 19:42:33.701: INFO: File wheezy_udp@dns-test-service-3.dns-609.svc.cluster.local from pod  dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 31 19:42:33.731: INFO: File jessie_udp@dns-test-service-3.dns-609.svc.cluster.local from pod  dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac contains '' instead of 'bar.example.com.'
Mar 31 19:42:33.731: INFO: Lookups using dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac failed for: [wheezy_udp@dns-test-service-3.dns-609.svc.cluster.local jessie_udp@dns-test-service-3.dns-609.svc.cluster.local]

Mar 31 19:42:38.701: INFO: File wheezy_udp@dns-test-service-3.dns-609.svc.cluster.local from pod  dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 31 19:42:38.732: INFO: Lookups using dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac failed for: [wheezy_udp@dns-test-service-3.dns-609.svc.cluster.local]

Mar 31 19:42:43.733: INFO: File jessie_udp@dns-test-service-3.dns-609.svc.cluster.local from pod  dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 31 19:42:43.733: INFO: Lookups using dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac failed for: [jessie_udp@dns-test-service-3.dns-609.svc.cluster.local]

Mar 31 19:42:48.701: INFO: File wheezy_udp@dns-test-service-3.dns-609.svc.cluster.local from pod  dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 31 19:42:48.734: INFO: File jessie_udp@dns-test-service-3.dns-609.svc.cluster.local from pod  dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac contains 'foo.example.com.
' instead of 'bar.example.com.'
Mar 31 19:42:48.734: INFO: Lookups using dns-609/dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac failed for: [wheezy_udp@dns-test-service-3.dns-609.svc.cluster.local jessie_udp@dns-test-service-3.dns-609.svc.cluster.local]

Mar 31 19:42:53.735: INFO: DNS probes using dns-test-f571a907-82c9-41bd-94bf-506945b2e7ac succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-609.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-609.svc.cluster.local; sleep 1; done
... skipping 9 lines ...
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 31 19:42:56.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-609" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":283,"completed":224,"skipped":3764,"failed":0}

------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Mar 31 19:42:56.408: INFO: Asynchronously running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl kubectl --server=https://34.98.69.39:443 --kubeconfig=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig proxy --unix-socket=/tmp/kubectl-proxy-unix523768647/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 19:42:56.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3152" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":283,"completed":225,"skipped":3764,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Mar 31 19:42:58.818: INFO: Trying to dial the pod
Mar 31 19:43:03.911: INFO: Controller my-hostname-basic-78d31087-827c-4175-a308-aa082cd668d5: Got expected result from replica 1 [my-hostname-basic-78d31087-827c-4175-a308-aa082cd668d5-5kftk]: "my-hostname-basic-78d31087-827c-4175-a308-aa082cd668d5-5kftk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 31 19:43:03.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8636" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":283,"completed":226,"skipped":3800,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 31 19:43:04.002: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 31 19:43:04.179: INFO: Waiting up to 5m0s for pod "pod-80520aff-55ed-4b2e-8a9a-a58d8ebcd6ba" in namespace "emptydir-9641" to be "Succeeded or Failed"
Mar 31 19:43:04.211: INFO: Pod "pod-80520aff-55ed-4b2e-8a9a-a58d8ebcd6ba": Phase="Pending", Reason="", readiness=false. Elapsed: 32.609249ms
Mar 31 19:43:06.242: INFO: Pod "pod-80520aff-55ed-4b2e-8a9a-a58d8ebcd6ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063200657s
STEP: Saw pod success
Mar 31 19:43:06.242: INFO: Pod "pod-80520aff-55ed-4b2e-8a9a-a58d8ebcd6ba" satisfied condition "Succeeded or Failed"
Mar 31 19:43:06.271: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-80520aff-55ed-4b2e-8a9a-a58d8ebcd6ba container test-container: <nil>
STEP: delete the pod
Mar 31 19:43:06.348: INFO: Waiting for pod pod-80520aff-55ed-4b2e-8a9a-a58d8ebcd6ba to disappear
Mar 31 19:43:06.379: INFO: Pod pod-80520aff-55ed-4b2e-8a9a-a58d8ebcd6ba no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 19:43:06.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9641" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":227,"skipped":3801,"failed":0}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 26 lines ...
Mar 31 19:43:25.433: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 31 19:43:25.691: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 31 19:43:25.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7743" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":228,"skipped":3808,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-db0dedba-ccb7-4c51-9519-a7145fd093d6
STEP: Creating a pod to test consume configMaps
Mar 31 19:43:25.985: INFO: Waiting up to 5m0s for pod "pod-configmaps-54c69b97-e64a-4da8-bb5e-4cbb63aae95c" in namespace "configmap-3147" to be "Succeeded or Failed"
Mar 31 19:43:26.017: INFO: Pod "pod-configmaps-54c69b97-e64a-4da8-bb5e-4cbb63aae95c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.231332ms
Mar 31 19:43:28.049: INFO: Pod "pod-configmaps-54c69b97-e64a-4da8-bb5e-4cbb63aae95c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.063846048s
STEP: Saw pod success
Mar 31 19:43:28.049: INFO: Pod "pod-configmaps-54c69b97-e64a-4da8-bb5e-4cbb63aae95c" satisfied condition "Succeeded or Failed"
Mar 31 19:43:28.079: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-configmaps-54c69b97-e64a-4da8-bb5e-4cbb63aae95c container configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 19:43:28.153: INFO: Waiting for pod pod-configmaps-54c69b97-e64a-4da8-bb5e-4cbb63aae95c to disappear
Mar 31 19:43:28.183: INFO: Pod pod-configmaps-54c69b97-e64a-4da8-bb5e-4cbb63aae95c no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 31 19:43:28.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3147" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":283,"completed":229,"skipped":3816,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
... skipping 39 lines ...
Mar 31 19:44:39.792: INFO: Waiting for statefulset status.replicas updated to 0
Mar 31 19:44:39.823: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:175
Mar 31 19:44:39.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6941" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":283,"completed":230,"skipped":3819,"failed":0}

------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
... skipping 19 lines ...
Mar 31 19:45:00.243: INFO: The status of Pod test-webserver-43aa6e44-5098-4040-a820-0886495f5dfb is Running (Ready = true)
Mar 31 19:45:00.273: INFO: Container started at 2020-03-31 19:44:40 +0000 UTC, pod became ready at 2020-03-31 19:44:59 +0000 UTC
[AfterEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:175
Mar 31 19:45:00.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2109" for this suite.
•{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":283,"completed":231,"skipped":3819,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:175
Mar 31 19:45:05.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2254" for this suite.
STEP: Destroying namespace "webhook-2254-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":283,"completed":232,"skipped":3843,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 31 19:45:05.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2332" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":283,"completed":233,"skipped":3875,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
... skipping 10 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:175
Mar 31 19:45:08.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7286" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":283,"completed":234,"skipped":3894,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 31 19:45:08.324: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-c8fbe907-92e0-462a-9a7a-209f99acff2e" in namespace "security-context-test-5727" to be "Succeeded or Failed"
Mar 31 19:45:08.356: INFO: Pod "busybox-readonly-false-c8fbe907-92e0-462a-9a7a-209f99acff2e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.727175ms
Mar 31 19:45:10.397: INFO: Pod "busybox-readonly-false-c8fbe907-92e0-462a-9a7a-209f99acff2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.072102303s
Mar 31 19:45:10.397: INFO: Pod "busybox-readonly-false-c8fbe907-92e0-462a-9a7a-209f99acff2e" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 31 19:45:10.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5727" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":283,"completed":235,"skipped":3912,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 13 lines ...
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 31 19:45:27.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7036" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":283,"completed":236,"skipped":3932,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Updating configmap configmap-test-upd-86769206-f0f3-4eed-b429-9680279d1766
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 31 19:45:34.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-444" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":237,"skipped":3935,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:45:34.773: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01c3229a-3e67-45bb-be28-00e30d4bea23" in namespace "downward-api-2866" to be "Succeeded or Failed"
Mar 31 19:45:34.805: INFO: Pod "downwardapi-volume-01c3229a-3e67-45bb-be28-00e30d4bea23": Phase="Pending", Reason="", readiness=false. Elapsed: 32.084567ms
Mar 31 19:45:36.836: INFO: Pod "downwardapi-volume-01c3229a-3e67-45bb-be28-00e30d4bea23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062453021s
STEP: Saw pod success
Mar 31 19:45:36.836: INFO: Pod "downwardapi-volume-01c3229a-3e67-45bb-be28-00e30d4bea23" satisfied condition "Succeeded or Failed"
Mar 31 19:45:36.865: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-01c3229a-3e67-45bb-be28-00e30d4bea23 container client-container: <nil>
STEP: delete the pod
Mar 31 19:45:36.959: INFO: Waiting for pod downwardapi-volume-01c3229a-3e67-45bb-be28-00e30d4bea23 to disappear
Mar 31 19:45:36.991: INFO: Pod downwardapi-volume-01c3229a-3e67-45bb-be28-00e30d4bea23 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 31 19:45:36.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2866" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":283,"completed":238,"skipped":3948,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-454d6189-7f52-4d44-9ebe-973c856ae608
STEP: Creating a pod to test consume configMaps
Mar 31 19:45:37.293: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-caccd47b-9f94-4a89-8c1c-aab61178cc01" in namespace "projected-2828" to be "Succeeded or Failed"
Mar 31 19:45:37.323: INFO: Pod "pod-projected-configmaps-caccd47b-9f94-4a89-8c1c-aab61178cc01": Phase="Pending", Reason="", readiness=false. Elapsed: 30.56386ms
Mar 31 19:45:39.353: INFO: Pod "pod-projected-configmaps-caccd47b-9f94-4a89-8c1c-aab61178cc01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060201169s
STEP: Saw pod success
Mar 31 19:45:39.353: INFO: Pod "pod-projected-configmaps-caccd47b-9f94-4a89-8c1c-aab61178cc01" satisfied condition "Succeeded or Failed"
Mar 31 19:45:39.383: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-projected-configmaps-caccd47b-9f94-4a89-8c1c-aab61178cc01 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 19:45:39.462: INFO: Waiting for pod pod-projected-configmaps-caccd47b-9f94-4a89-8c1c-aab61178cc01 to disappear
Mar 31 19:45:39.493: INFO: Pod pod-projected-configmaps-caccd47b-9f94-4a89-8c1c-aab61178cc01 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 31 19:45:39.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2828" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":283,"completed":239,"skipped":3955,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:45:39.752: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5a1644f-0224-4b5d-81d7-c2a2c1979cad" in namespace "downward-api-6045" to be "Succeeded or Failed"
Mar 31 19:45:39.787: INFO: Pod "downwardapi-volume-c5a1644f-0224-4b5d-81d7-c2a2c1979cad": Phase="Pending", Reason="", readiness=false. Elapsed: 34.427385ms
Mar 31 19:45:41.817: INFO: Pod "downwardapi-volume-c5a1644f-0224-4b5d-81d7-c2a2c1979cad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064639036s
STEP: Saw pod success
Mar 31 19:45:41.817: INFO: Pod "downwardapi-volume-c5a1644f-0224-4b5d-81d7-c2a2c1979cad" satisfied condition "Succeeded or Failed"
Mar 31 19:45:41.847: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-c5a1644f-0224-4b5d-81d7-c2a2c1979cad container client-container: <nil>
STEP: delete the pod
Mar 31 19:45:41.923: INFO: Waiting for pod downwardapi-volume-c5a1644f-0224-4b5d-81d7-c2a2c1979cad to disappear
Mar 31 19:45:41.954: INFO: Pod downwardapi-volume-c5a1644f-0224-4b5d-81d7-c2a2c1979cad no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 31 19:45:41.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6045" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":240,"skipped":3961,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Mar 31 19:45:44.450: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:44.481: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:44.574: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:44.606: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:44.638: INFO: Unable to read jessie_udp@dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:44.670: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:44.736: INFO: Lookups using dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local wheezy_udp@dns-test-service-2.dns-829.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-829.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local jessie_udp@dns-test-service-2.dns-829.svc.cluster.local jessie_tcp@dns-test-service-2.dns-829.svc.cluster.local]

Mar 31 19:45:49.768: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:49.799: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:49.957: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:49.989: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:50.116: INFO: Lookups using dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local]

Mar 31 19:45:54.776: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:54.808: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:54.964: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:54.996: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:55.120: INFO: Lookups using dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local]

Mar 31 19:45:59.767: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:59.799: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:59.956: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:45:59.990: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:46:00.116: INFO: Lookups using dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local]

Mar 31 19:46:04.771: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:46:04.803: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:46:04.959: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:46:04.991: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:46:05.118: INFO: Lookups using dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local]

Mar 31 19:46:09.769: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:46:09.802: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:46:09.962: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:46:09.994: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local from pod dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71: the server could not find the requested resource (get pods dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71)
Mar 31 19:46:10.120: INFO: Lookups using dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-829.svc.cluster.local]

Mar 31 19:46:15.125: INFO: DNS probes using dns-829/dns-test-cc646320-7f04-4b29-9aa4-b17e16b70c71 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:175
Mar 31 19:46:15.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-829" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":283,"completed":241,"skipped":3971,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  test/e2e/framework/framework.go:175
Mar 31 19:46:21.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1286" for this suite.
STEP: Destroying namespace "webhook-1286-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":283,"completed":242,"skipped":3996,"failed":0}
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 55 lines ...
Mar 31 19:46:45.225: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7187/pods","resourceVersion":"26331"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:175
Mar 31 19:46:45.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7187" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":283,"completed":243,"skipped":4000,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:46:45.585: INFO: Waiting up to 5m0s for pod "downwardapi-volume-792fa591-6cb4-4e48-9559-f0ba68159c2b" in namespace "downward-api-5768" to be "Succeeded or Failed"
Mar 31 19:46:45.619: INFO: Pod "downwardapi-volume-792fa591-6cb4-4e48-9559-f0ba68159c2b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.863293ms
Mar 31 19:46:47.649: INFO: Pod "downwardapi-volume-792fa591-6cb4-4e48-9559-f0ba68159c2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064469565s
STEP: Saw pod success
Mar 31 19:46:47.649: INFO: Pod "downwardapi-volume-792fa591-6cb4-4e48-9559-f0ba68159c2b" satisfied condition "Succeeded or Failed"
Mar 31 19:46:47.679: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-792fa591-6cb4-4e48-9559-f0ba68159c2b container client-container: <nil>
STEP: delete the pod
Mar 31 19:46:47.756: INFO: Waiting for pod downwardapi-volume-792fa591-6cb4-4e48-9559-f0ba68159c2b to disappear
Mar 31 19:46:47.785: INFO: Pod downwardapi-volume-792fa591-6cb4-4e48-9559-f0ba68159c2b no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 31 19:46:47.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5768" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":283,"completed":244,"skipped":4013,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 31 19:46:59.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5337" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":283,"completed":245,"skipped":4023,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 31 19:46:59.343: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 31 19:46:59.512: INFO: Waiting up to 5m0s for pod "pod-ebd56785-9ec5-429f-9679-e3e5fa760041" in namespace "emptydir-6267" to be "Succeeded or Failed"
Mar 31 19:46:59.543: INFO: Pod "pod-ebd56785-9ec5-429f-9679-e3e5fa760041": Phase="Pending", Reason="", readiness=false. Elapsed: 30.970569ms
Mar 31 19:47:01.574: INFO: Pod "pod-ebd56785-9ec5-429f-9679-e3e5fa760041": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061502824s
STEP: Saw pod success
Mar 31 19:47:01.574: INFO: Pod "pod-ebd56785-9ec5-429f-9679-e3e5fa760041" satisfied condition "Succeeded or Failed"
Mar 31 19:47:01.604: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-ebd56785-9ec5-429f-9679-e3e5fa760041 container test-container: <nil>
STEP: delete the pod
Mar 31 19:47:01.681: INFO: Waiting for pod pod-ebd56785-9ec5-429f-9679-e3e5fa760041 to disappear
Mar 31 19:47:01.713: INFO: Pod pod-ebd56785-9ec5-429f-9679-e3e5fa760041 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 19:47:01.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6267" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":246,"skipped":4039,"failed":0}
SSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-afc0e22f-734e-44fc-be25-13c5979dc7b9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 31 19:48:17.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-85" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":247,"skipped":4042,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:175
Mar 31 19:48:34.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2662" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":283,"completed":248,"skipped":4082,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
... skipping 25 lines ...
Mar 31 19:48:51.402: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
Mar 31 19:48:51.650: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  test/e2e/framework/framework.go:175
Mar 31 19:48:51.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6210" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":283,"completed":249,"skipped":4091,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:48:51.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-852c9ffd-18d1-4cfb-9f39-b9b4ab0dd3bb" in namespace "projected-6169" to be "Succeeded or Failed"
Mar 31 19:48:51.939: INFO: Pod "downwardapi-volume-852c9ffd-18d1-4cfb-9f39-b9b4ab0dd3bb": Phase="Pending", Reason="", readiness=false. Elapsed: 29.74054ms
Mar 31 19:48:53.969: INFO: Pod "downwardapi-volume-852c9ffd-18d1-4cfb-9f39-b9b4ab0dd3bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.059568507s
STEP: Saw pod success
Mar 31 19:48:53.969: INFO: Pod "downwardapi-volume-852c9ffd-18d1-4cfb-9f39-b9b4ab0dd3bb" satisfied condition "Succeeded or Failed"
Mar 31 19:48:53.999: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-852c9ffd-18d1-4cfb-9f39-b9b4ab0dd3bb container client-container: <nil>
STEP: delete the pod
Mar 31 19:48:54.086: INFO: Waiting for pod downwardapi-volume-852c9ffd-18d1-4cfb-9f39-b9b4ab0dd3bb to disappear
Mar 31 19:48:54.118: INFO: Pod downwardapi-volume-852c9ffd-18d1-4cfb-9f39-b9b4ab0dd3bb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
Mar 31 19:48:54.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6169" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":283,"completed":250,"skipped":4099,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 24 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 31 19:48:58.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7970" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":283,"completed":251,"skipped":4102,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-32a1c54a-f6d8-44c0-abd5-4ea164a80f9c
STEP: Creating a pod to test consume secrets
Mar 31 19:48:58.416: INFO: Waiting up to 5m0s for pod "pod-secrets-f91e114c-656c-4c0d-a7e0-ac36e9c1b7f8" in namespace "secrets-9337" to be "Succeeded or Failed"
Mar 31 19:48:58.446: INFO: Pod "pod-secrets-f91e114c-656c-4c0d-a7e0-ac36e9c1b7f8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.630655ms
Mar 31 19:49:00.476: INFO: Pod "pod-secrets-f91e114c-656c-4c0d-a7e0-ac36e9c1b7f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059695495s
Mar 31 19:49:02.506: INFO: Pod "pod-secrets-f91e114c-656c-4c0d-a7e0-ac36e9c1b7f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090078367s
Mar 31 19:49:04.537: INFO: Pod "pod-secrets-f91e114c-656c-4c0d-a7e0-ac36e9c1b7f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120512718s
Mar 31 19:49:06.567: INFO: Pod "pod-secrets-f91e114c-656c-4c0d-a7e0-ac36e9c1b7f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15057441s
Mar 31 19:49:08.597: INFO: Pod "pod-secrets-f91e114c-656c-4c0d-a7e0-ac36e9c1b7f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.181133838s
STEP: Saw pod success
Mar 31 19:49:08.597: INFO: Pod "pod-secrets-f91e114c-656c-4c0d-a7e0-ac36e9c1b7f8" satisfied condition "Succeeded or Failed"
Mar 31 19:49:08.627: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-secrets-f91e114c-656c-4c0d-a7e0-ac36e9c1b7f8 container secret-volume-test: <nil>
STEP: delete the pod
Mar 31 19:49:08.706: INFO: Waiting for pod pod-secrets-f91e114c-656c-4c0d-a7e0-ac36e9c1b7f8 to disappear
Mar 31 19:49:08.735: INFO: Pod pod-secrets-f91e114c-656c-4c0d-a7e0-ac36e9c1b7f8 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 31 19:49:08.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9337" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":252,"skipped":4124,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-355f3c64-f7c5-446a-8cbc-3942a0d9fe94
STEP: Creating a pod to test consume secrets
Mar 31 19:49:09.173: INFO: Waiting up to 5m0s for pod "pod-secrets-b474da0e-c748-4f49-b530-a17c6689eb37" in namespace "secrets-2357" to be "Succeeded or Failed"
Mar 31 19:49:09.205: INFO: Pod "pod-secrets-b474da0e-c748-4f49-b530-a17c6689eb37": Phase="Pending", Reason="", readiness=false. Elapsed: 32.068336ms
Mar 31 19:49:11.236: INFO: Pod "pod-secrets-b474da0e-c748-4f49-b530-a17c6689eb37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062473265s
STEP: Saw pod success
Mar 31 19:49:11.236: INFO: Pod "pod-secrets-b474da0e-c748-4f49-b530-a17c6689eb37" satisfied condition "Succeeded or Failed"
Mar 31 19:49:11.266: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-secrets-b474da0e-c748-4f49-b530-a17c6689eb37 container secret-volume-test: <nil>
STEP: delete the pod
Mar 31 19:49:11.343: INFO: Waiting for pod pod-secrets-b474da0e-c748-4f49-b530-a17c6689eb37 to disappear
Mar 31 19:49:11.372: INFO: Pod pod-secrets-b474da0e-c748-4f49-b530-a17c6689eb37 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
Mar 31 19:49:11.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2357" for this suite.
STEP: Destroying namespace "secret-namespace-1210" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":283,"completed":253,"skipped":4149,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  test/e2e/framework/framework.go:175
Mar 31 19:49:15.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3814" for this suite.
STEP: Destroying namespace "webhook-3814-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":283,"completed":254,"skipped":4160,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-z6xn
STEP: Creating a pod to test atomic-volume-subpath
Mar 31 19:49:15.989: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-z6xn" in namespace "subpath-344" to be "Succeeded or Failed"
Mar 31 19:49:16.018: INFO: Pod "pod-subpath-test-projected-z6xn": Phase="Pending", Reason="", readiness=false. Elapsed: 29.297454ms
Mar 31 19:49:18.049: INFO: Pod "pod-subpath-test-projected-z6xn": Phase="Running", Reason="", readiness=true. Elapsed: 2.059766143s
Mar 31 19:49:20.079: INFO: Pod "pod-subpath-test-projected-z6xn": Phase="Running", Reason="", readiness=true. Elapsed: 4.090042551s
Mar 31 19:49:22.110: INFO: Pod "pod-subpath-test-projected-z6xn": Phase="Running", Reason="", readiness=true. Elapsed: 6.120318669s
Mar 31 19:49:24.140: INFO: Pod "pod-subpath-test-projected-z6xn": Phase="Running", Reason="", readiness=true. Elapsed: 8.150663722s
Mar 31 19:49:26.170: INFO: Pod "pod-subpath-test-projected-z6xn": Phase="Running", Reason="", readiness=true. Elapsed: 10.181271315s
Mar 31 19:49:28.201: INFO: Pod "pod-subpath-test-projected-z6xn": Phase="Running", Reason="", readiness=true. Elapsed: 12.211651364s
Mar 31 19:49:30.231: INFO: Pod "pod-subpath-test-projected-z6xn": Phase="Running", Reason="", readiness=true. Elapsed: 14.242107946s
Mar 31 19:49:32.262: INFO: Pod "pod-subpath-test-projected-z6xn": Phase="Running", Reason="", readiness=true. Elapsed: 16.273063409s
Mar 31 19:49:34.294: INFO: Pod "pod-subpath-test-projected-z6xn": Phase="Running", Reason="", readiness=true. Elapsed: 18.305037707s
Mar 31 19:49:36.326: INFO: Pod "pod-subpath-test-projected-z6xn": Phase="Running", Reason="", readiness=true. Elapsed: 20.336471095s
Mar 31 19:49:38.356: INFO: Pod "pod-subpath-test-projected-z6xn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.366374704s
STEP: Saw pod success
Mar 31 19:49:38.356: INFO: Pod "pod-subpath-test-projected-z6xn" satisfied condition "Succeeded or Failed"
Mar 31 19:49:38.385: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-subpath-test-projected-z6xn container test-container-subpath-projected-z6xn: <nil>
STEP: delete the pod
Mar 31 19:49:38.461: INFO: Waiting for pod pod-subpath-test-projected-z6xn to disappear
Mar 31 19:49:38.491: INFO: Pod pod-subpath-test-projected-z6xn no longer exists
STEP: Deleting pod pod-subpath-test-projected-z6xn
Mar 31 19:49:38.491: INFO: Deleting pod "pod-subpath-test-projected-z6xn" in namespace "subpath-344"
[AfterEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:175
Mar 31 19:49:38.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-344" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":283,"completed":255,"skipped":4166,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Mar 31 19:49:38.611: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 31 19:49:38.780: INFO: Waiting up to 5m0s for pod "pod-a66f974a-b6e9-4a97-a636-9fd7a635ef7d" in namespace "emptydir-9751" to be "Succeeded or Failed"
Mar 31 19:49:38.811: INFO: Pod "pod-a66f974a-b6e9-4a97-a636-9fd7a635ef7d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.006676ms
Mar 31 19:49:40.842: INFO: Pod "pod-a66f974a-b6e9-4a97-a636-9fd7a635ef7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.062041543s
STEP: Saw pod success
Mar 31 19:49:40.842: INFO: Pod "pod-a66f974a-b6e9-4a97-a636-9fd7a635ef7d" satisfied condition "Succeeded or Failed"
Mar 31 19:49:40.872: INFO: Trying to get logs from node test1-md-0-zgc56.c.k8s-gce-serial-1-5.internal pod pod-a66f974a-b6e9-4a97-a636-9fd7a635ef7d container test-container: <nil>
STEP: delete the pod
Mar 31 19:49:40.951: INFO: Waiting for pod pod-a66f974a-b6e9-4a97-a636-9fd7a635ef7d to disappear
Mar 31 19:49:40.981: INFO: Pod pod-a66f974a-b6e9-4a97-a636-9fd7a635ef7d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
Mar 31 19:49:40.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9751" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":256,"skipped":4185,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
... skipping 10 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Mar 31 19:49:43.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7815" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":283,"completed":257,"skipped":4188,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 9 lines ...
STEP: Creating the pod
Mar 31 19:49:46.403: INFO: Successfully updated pod "labelsupdateab487be3-5123-48b7-9cd1-5c5cbd559949"
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 31 19:49:50.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-591" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":283,"completed":258,"skipped":4249,"failed":0}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-6ee7a18e-fe9d-4c93-8847-4b162ddc1cae
STEP: Creating a pod to test consume configMaps
Mar 31 19:49:50.798: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b71331ab-436a-443f-ab7b-f6a4003ba6bb" in namespace "projected-7069" to be "Succeeded or Failed"
Mar 31 19:49:50.832: INFO: Pod "pod-projected-configmaps-b71331ab-436a-443f-ab7b-f6a4003ba6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.960992ms
Mar 31 19:49:52.862: INFO: Pod "pod-projected-configmaps-b71331ab-436a-443f-ab7b-f6a4003ba6bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.064149226s
STEP: Saw pod success
Mar 31 19:49:52.862: INFO: Pod "pod-projected-configmaps-b71331ab-436a-443f-ab7b-f6a4003ba6bb" satisfied condition "Succeeded or Failed"
Mar 31 19:49:52.893: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-projected-configmaps-b71331ab-436a-443f-ab7b-f6a4003ba6bb container projected-configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 19:49:52.970: INFO: Waiting for pod pod-projected-configmaps-b71331ab-436a-443f-ab7b-f6a4003ba6bb to disappear
Mar 31 19:49:52.999: INFO: Pod pod-projected-configmaps-b71331ab-436a-443f-ab7b-f6a4003ba6bb no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
Mar 31 19:49:52.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7069" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":283,"completed":259,"skipped":4250,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
... skipping 12 lines ...
STEP: Creating secret with name s-test-opt-create-7c7cccc2-d084-418f-9f80-c311aee57957
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
Mar 31 19:51:09.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1492" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":283,"completed":260,"skipped":4283,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 15 lines ...
Mar 31 19:51:15.270: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 19:51:27.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6384" for this suite.
STEP: Destroying namespace "webhook-6384-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":283,"completed":261,"skipped":4297,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-95d967a0-4ba8-4c47-b53c-616dfb8737c1
STEP: Creating a pod to test consume configMaps
Mar 31 19:51:28.462: INFO: Waiting up to 5m0s for pod "pod-configmaps-f473f6b1-b434-43ca-9b08-fc9a35a2ebab" in namespace "configmap-3191" to be "Succeeded or Failed"
Mar 31 19:51:28.494: INFO: Pod "pod-configmaps-f473f6b1-b434-43ca-9b08-fc9a35a2ebab": Phase="Pending", Reason="", readiness=false. Elapsed: 31.146831ms
Mar 31 19:51:30.524: INFO: Pod "pod-configmaps-f473f6b1-b434-43ca-9b08-fc9a35a2ebab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.061151163s
STEP: Saw pod success
Mar 31 19:51:30.524: INFO: Pod "pod-configmaps-f473f6b1-b434-43ca-9b08-fc9a35a2ebab" satisfied condition "Succeeded or Failed"
Mar 31 19:51:30.553: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod pod-configmaps-f473f6b1-b434-43ca-9b08-fc9a35a2ebab container configmap-volume-test: <nil>
STEP: delete the pod
Mar 31 19:51:30.628: INFO: Waiting for pod pod-configmaps-f473f6b1-b434-43ca-9b08-fc9a35a2ebab to disappear
Mar 31 19:51:30.660: INFO: Pod pod-configmaps-f473f6b1-b434-43ca-9b08-fc9a35a2ebab no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
Mar 31 19:51:30.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3191" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":262,"skipped":4319,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 31 19:51:30.750: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Mar 31 19:51:30.881: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Mar 31 19:51:33.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-393" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":283,"completed":263,"skipped":4342,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Mar 31 19:51:33.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6243" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:707
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":283,"completed":264,"skipped":4357,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Mar 31 19:51:34.183: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-754d66d7-3eef-4092-b6a9-78f5cb8295cd" in namespace "security-context-test-1770" to be "Succeeded or Failed"
Mar 31 19:51:34.217: INFO: Pod "busybox-privileged-false-754d66d7-3eef-4092-b6a9-78f5cb8295cd": Phase="Pending", Reason="", readiness=false. Elapsed: 33.515226ms
Mar 31 19:51:36.251: INFO: Pod "busybox-privileged-false-754d66d7-3eef-4092-b6a9-78f5cb8295cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.068037644s
Mar 31 19:51:36.251: INFO: Pod "busybox-privileged-false-754d66d7-3eef-4092-b6a9-78f5cb8295cd" satisfied condition "Succeeded or Failed"
Mar 31 19:51:36.288: INFO: Got logs for pod "busybox-privileged-false-754d66d7-3eef-4092-b6a9-78f5cb8295cd": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Mar 31 19:51:36.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1770" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":283,"completed":265,"skipped":4378,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
Mar 31 19:51:42.066: INFO: stderr: ""
Mar 31 19:51:42.067: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4310-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Mar 31 19:51:45.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9092" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":283,"completed":266,"skipped":4408,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
Mar 31 19:51:53.253: INFO: stderr: ""
Mar 31 19:51:53.253: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Mar 31 19:51:53.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-519" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":283,"completed":267,"skipped":4416,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Mar 31 19:51:53.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd418dca-bb89-4222-ad77-fc23e38c4af7" in namespace "downward-api-1357" to be "Succeeded or Failed"
Mar 31 19:51:53.547: INFO: Pod "downwardapi-volume-cd418dca-bb89-4222-ad77-fc23e38c4af7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.868749ms
Mar 31 19:51:55.577: INFO: Pod "downwardapi-volume-cd418dca-bb89-4222-ad77-fc23e38c4af7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.060365926s
STEP: Saw pod success
Mar 31 19:51:55.577: INFO: Pod "downwardapi-volume-cd418dca-bb89-4222-ad77-fc23e38c4af7" satisfied condition "Succeeded or Failed"
Mar 31 19:51:55.607: INFO: Trying to get logs from node test1-md-0-xq99b.c.k8s-gce-serial-1-5.internal pod downwardapi-volume-cd418dca-bb89-4222-ad77-fc23e38c4af7 container client-container: <nil>
STEP: delete the pod
Mar 31 19:51:55.683: INFO: Waiting for pod downwardapi-volume-cd418dca-bb89-4222-ad77-fc23e38c4af7 to disappear
Mar 31 19:51:55.713: INFO: Pod downwardapi-volume-cd418dca-bb89-4222-ad77-fc23e38c4af7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
Mar 31 19:51:55.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1357" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":283,"completed":268,"skipped":4423,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:175
Mar 31 19:52:01.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8971" for this suite.
STEP: Destroying namespace "webhook-8971-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":283,"completed":269,"skipped":4425,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Mar 31 19:52:01.631: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-gcp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
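
The mechanism under test here is subPathExpr: the mount's subpath is expanded from a container environment variable, which in turn can come from a pod annotation, so editing the annotation changes what the initially failing expansion resolves to on the next container start attempt. A minimal sketch of the moving parts, with illustrative names; the suite's actual pod and failure condition differ:

  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion                # illustrative
    annotations:
      mysubpath: "subdir"              # updating this annotation changes the expanded subpath
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 600"]
      env:
      - name: POD_SUBPATH
        valueFrom:
          fieldRef:
            fieldPath: "metadata.annotations['mysubpath']"
      volumeMounts:
      - name: workdir
        mountPath: /subpath_mount
        subPathExpr: $(POD_SUBPATH)    # expanded from the container's environment
    volumes:
    - name: workdir
      emptyDir: {}
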
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","time":"2020-03-31T19:53:09Z"}
{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-03-31T19:53:24Z"}