PR andrewsykim: support configuration of kube-proxy IPVS tcp,tcpfin,udp timeout
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-12-17 09:24
Elapsed: 11m55s
Revision: 0ea25ec674978550813b505585b0c2174fcd9d35
Refs: 85517
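
For context, the PR under test (Refs 85517) adds fields to kube-proxy's IPVS configuration for tuning connection-tracking timeouts. A KubeProxyConfiguration fragment exercising them might look like the sketch below (the `tcpTimeout`, `tcpFinTimeout`, and `udpTimeout` field names follow the KubeProxyConfiguration v1alpha1 API; the duration values are illustrative only, not defaults):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  # IPVS connection-tracking timeouts; a value of "0s" keeps the kernel default.
  tcpTimeout: 900s       # idle timeout for established TCP sessions
  tcpFinTimeout: 120s    # timeout for TCP sessions after receiving a FIN
  udpTimeout: 300s       # idle timeout for UDP flows
```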

No Test Failures!


Error lines from build-log.txt

... skipping 194 lines ...
localAPIEndpoint:
  advertiseAddress: 172.17.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 29 lines ...
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.17.0.3
... skipping 4 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 29 lines ...
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 114 lines ...
I1217 09:29:03.650292     144 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s  in 0 milliseconds
I1217 09:29:03.650643     144 request.go:853] Got a Retry-After 1s response for attempt 3 to https://172.17.0.3:6443/healthz?timeout=10s
I1217 09:29:04.651206     144 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s  in 0 milliseconds
I1217 09:29:04.651274     144 request.go:853] Got a Retry-After 1s response for attempt 4 to https://172.17.0.3:6443/healthz?timeout=10s
I1217 09:29:05.653148     144 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s  in 1 milliseconds
I1217 09:29:05.653715     144 request.go:853] Got a Retry-After 1s response for attempt 5 to https://172.17.0.3:6443/healthz?timeout=10s
I1217 09:29:11.609759     144 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s 500 Internal Server Error in 4955 milliseconds
I1217 09:29:12.116413     144 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s 500 Internal Server Error in 3 milliseconds
I1217 09:29:12.613490     144 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I1217 09:29:13.112988     144 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I1217 09:29:13.614788     144 round_trippers.go:443] GET https://172.17.0.3:6443/healthz?timeout=10s 200 OK in 4 milliseconds
[apiclient] All control plane components are healthy after 11.968617 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1217 09:29:13.616504     144 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
I1217 09:29:13.629238     144 round_trippers.go:443] POST https://172.17.0.3:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 10 milliseconds
I1217 09:29:13.637769     144 round_trippers.go:443] POST https://172.17.0.3:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 6 milliseconds
... skipping 334 lines ...
Will run 4840 specs

Running in parallel across 25 nodes

Dec 17 09:30:27.747: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Dec 17 09:30:27.751: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 17 09:30:28.027: INFO: Condition Ready of node kind-worker is false instead of true. Reason: KubeletNotReady, message: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Dec 17 09:30:28.028: INFO: Condition Ready of node kind-worker2 is false instead of true. Reason: KubeletNotReady, message: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Dec 17 09:30:28.028: INFO: Unschedulable nodes:
Dec 17 09:30:28.028: INFO: -> kind-worker Ready=false Network=false Taints=[{node.kubernetes.io/not-ready  NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Dec 17 09:30:28.028: INFO: -> kind-worker2 Ready=false Network=false Taints=[{node.kubernetes.io/not-ready  NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Dec 17 09:30:28.028: INFO: ================================
Dec 17 09:30:58.035: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 17 09:30:58.090: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
... skipping 662 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:30:58.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-391" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:30:58.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2301" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":-1,"completed":1,"skipped":15,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:30:59.022: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 53 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:31:01.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-1775" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:31:01.960: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 80 lines ...
• [SLOW TEST:10.671 seconds]
[sig-api-machinery] Watchers
test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:31:08.963: INFO: Driver local doesn't support ext4 -- skipping
... skipping 73 lines ...
• [SLOW TEST:13.756 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:680
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]
  test/e2e/node/security_context.go:117
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:31:11.922: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 54 lines ...
• [SLOW TEST:18.737 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 64 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl patch
  test/e2e/kubectl/kubectl.go:1540
    should add annotations for pods in rc  [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 34 lines ...
test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:43
    files with FSGroup ownership should support (root,0644,tmpfs)
    test/e2e/common/empty_dir.go:62
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:20.676 seconds]
[sig-storage] Projected secret
test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:31:19.001: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 45 lines ...
• [SLOW TEST:22.179 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  test/e2e/apimachinery/resource_quota.go:559
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":1,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 80 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [k8s.io] NodeLease
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:31:22.007: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename node-lease-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:31:22.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-1530" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:31:22.025: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:31:22.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4544" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Ephemeralstorage
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:31:32.724: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 55 lines ...
  test/e2e/kubectl/portforward.go:464
    that expects a client request
    test/e2e/kubectl/portforward.go:465
      should support a client that connects, sends NO DATA, and disconnects
      test/e2e/kubectl/portforward.go:466
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:31:32.829: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
Dec 17 09:31:33.017: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.188 seconds]
[sig-storage] PersistentVolumes GCEPD
test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  test/e2e/storage/persistent_volumes-gce.go:138

  Only supported for providers [gce gke] (not skeleton)

  test/e2e/storage/persistent_volumes-gce.go:82
------------------------------
... skipping 42 lines ...
• [SLOW TEST:35.394 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 30 lines ...
test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:17.921 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 12 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:31:35.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2758" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:31:35.091: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
... skipping 112 lines ...
• [SLOW TEST:37.506 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  test/e2e/apimachinery/garbage_collector.go:437
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":-1,"completed":1,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
Dec 17 09:31:16.624: INFO: Waiting for PV local-pv5whtp to bind to PVC pvc-xgrlw
Dec 17 09:31:16.624: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-xgrlw] to have phase Bound
Dec 17 09:31:16.665: INFO: PersistentVolumeClaim pvc-xgrlw found but phase is Pending instead of Bound.
Dec 17 09:31:18.692: INFO: PersistentVolumeClaim pvc-xgrlw found and phase=Bound (2.067177577s)
Dec 17 09:31:18.692: INFO: Waiting up to 3m0s for PersistentVolume local-pv5whtp to have phase Bound
Dec 17 09:31:18.702: INFO: PersistentVolume local-pv5whtp found and phase=Bound (10.50909ms)
[It] should fail scheduling due to different NodeAffinity
  test/e2e/storage/persistent_volumes-local.go:359
STEP: local-volume-type: dir
STEP: Initializing test volumes
Dec 17 09:31:18.742: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-fc5e9deb-bf0d-429d-8106-3261a818dc21] Namespace:persistent-local-volumes-test-9305 PodName:hostexec-kind-worker-p5bcz ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Dec 17 09:31:18.742: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Creating local PVCs and PVs
... skipping 30 lines ...

• [SLOW TEST:37.621 seconds]
[sig-storage] PersistentVolumes-local 
test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  test/e2e/storage/persistent_volumes-local.go:337
    should fail scheduling due to different NodeAffinity
    test/e2e/storage/persistent_volumes-local.go:359
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":1,"skipped":9,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:31:36.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2588" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":2,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 12 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:31:43.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8791" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:31:43.086: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:175
Dec 17 09:31:43.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 44 lines ...
• [SLOW TEST:10.666 seconds]
[sig-node] Downward API
test/e2e/common/downward_api.go:33
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  test/e2e/common/downward_api.go:108
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":2,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:12.303 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:31:48.319: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 59 lines ...
• [SLOW TEST:15.420 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:31:50.264: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 66 lines ...
test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":2,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 123 lines ...
  test/e2e/storage/vsphere/vsphere_zone_support.go:205

  Only supported for providers [vsphere] (not skeleton)

  test/e2e/storage/vsphere/vsphere_zone_support.go:105
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":14,"failed":0}
[BeforeEach] [sig-storage] Volume Placement
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:31:50.387: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename volume-placement
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 81 lines ...
  test/e2e/kubectl/kubectl.go:279
[It] should check if cluster-info dump succeeds
  test/e2e/kubectl/kubectl.go:1149
STEP: running cluster-info dump
Dec 17 09:31:50.579: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35987 --kubeconfig=/root/.kube/kind-test-config cluster-info dump'
Dec 17 09:31:51.029: INFO: stderr: ""
Dec 17 09:31:51.030: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/nodes\",\n        \"resourceVersion\": \"3309\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane\",\n                \"selfLink\": \"/api/v1/nodes/kind-control-plane\",\n                \"uid\": \"02aad5bd-a337-4316-8440-c7d9935250c5\",\n                \"resourceVersion\": \"619\",\n                \"creationTimestamp\": \"2019-12-17T09:29:11Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-control-plane\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"annotations\": {\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"/run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"10.244.0.0/24\",\n                \"podCIDRs\": [\n                    \"10.244.0.0/24\"\n                ],\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253696108Ki\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53588700Ki\",\n                    \"pods\": 
\"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253696108Ki\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53588700Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-12-17T09:30:15Z\",\n                        \"lastTransitionTime\": \"2019-12-17T09:29:07Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-12-17T09:30:15Z\",\n                        \"lastTransitionTime\": \"2019-12-17T09:29:07Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-12-17T09:30:15Z\",\n                        \"lastTransitionTime\": \"2019-12-17T09:29:07Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2019-12-17T09:30:15Z\",\n                        \"lastTransitionTime\": \"2019-12-17T09:30:15Z\",\n                        \"reason\": \"KubeletReady\",\n 
                       \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.17.0.3\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"kind-control-plane\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"46f972d878214ada87e2089c796d4cfd\",\n                    \"systemUUID\": \"c5b3c1dd-2ff1-45ba-b242-608e8b9ddcfb\",\n                    \"bootID\": \"57c65ac7-2347-49e9-a3b1-68acb0049308\",\n                    \"kernelVersion\": \"4.14.138+\",\n                    \"osImage\": \"Ubuntu 19.10\",\n                    \"containerRuntimeVersion\": \"containerd://1.3.2\",\n                    \"kubeletVersion\": \"v1.18.0-alpha.0.1812+5ad586f84e16e5\",\n                    \"kubeProxyVersion\": \"v1.18.0-alpha.0.1812+5ad586f84e16e5\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd:3.4.3-0\"\n                        ],\n                        \"sizeBytes\": 289997247\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.1812_5ad586f84e16e5\"\n                        ],\n                        \"sizeBytes\": 196936859\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.1812_5ad586f84e16e5\"\n                        ],\n                        \"sizeBytes\": 181802148\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\"\n                        ],\n                        \"sizeBytes\": 123709946\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.1812_5ad586f84e16e5\"\n                        ],\n                        \"sizeBytes\": 102616218\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd:0.5.3\"\n                        ],\n                        \"sizeBytes\": 80345874\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/debian-base:v2.0.0\"\n                        ],\n                        \"sizeBytes\": 53884301\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns:1.6.5\"\n                        ],\n                        \"sizeBytes\": 41705951\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/rancher/local-path-provisioner:v0.0.11\"\n                        ],\n                        \"sizeBytes\": 36513375\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause:3.1\"\n                        ],\n                        \"sizeBytes\": 746479\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker\",\n                \"selfLink\": 
\"/api/v1/nodes/kind-worker\",\n                \"uid\": \"d0d178e3-45e1-428d-8224-18ccb519c892\",\n                \"resourceVersion\": \"3049\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubelet_cleanup\": \"true\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-worker\",\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"annotations\": {\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"/run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"10.244.1.0/24\",\n                \"podCIDRs\": [\n                    \"10.244.1.0/24\"\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253696108Ki\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53588700Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253696108Ki\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53588700Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-12-17T09:31:44Z\",\n                        
\"lastTransitionTime\": \"2019-12-17T09:29:54Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-12-17T09:31:44Z\",\n                        \"lastTransitionTime\": \"2019-12-17T09:29:54Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-12-17T09:31:44Z\",\n                        \"lastTransitionTime\": \"2019-12-17T09:29:54Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2019-12-17T09:31:44Z\",\n                        \"lastTransitionTime\": \"2019-12-17T09:30:34Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.17.0.2\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"kind-worker\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                       
 \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"a7d6ab9ec7954386917aeef08757d623\",\n                    \"systemUUID\": \"8149bafd-1aa1-4454-8018-85b865744e73\",\n                    \"bootID\": \"57c65ac7-2347-49e9-a3b1-68acb0049308\",\n                    \"kernelVersion\": \"4.14.138+\",\n                    \"osImage\": \"Ubuntu 19.10\",\n                    \"containerRuntimeVersion\": \"containerd://1.3.2\",\n                    \"kubeletVersion\": \"v1.18.0-alpha.0.1812+5ad586f84e16e5\",\n                    \"kubeProxyVersion\": \"v1.18.0-alpha.0.1812+5ad586f84e16e5\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd:3.4.3-0\"\n                        ],\n                        \"sizeBytes\": 289997247\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.1812_5ad586f84e16e5\"\n                        ],\n                        \"sizeBytes\": 196936859\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.1812_5ad586f84e16e5\"\n                        ],\n                        \"sizeBytes\": 181802148\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\"\n                        ],\n                        \"sizeBytes\": 123709946\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.1812_5ad586f84e16e5\"\n                       
 ],\n                        \"sizeBytes\": 102616218\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd:0.5.3\"\n                        ],\n                        \"sizeBytes\": 80345874\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/debian-base:v2.0.0\"\n                        ],\n                        \"sizeBytes\": 53884301\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns:1.6.5\"\n                        ],\n                        \"sizeBytes\": 41705951\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                            \"docker.io/library/httpd:2.4.38-alpine\"\n                        ],\n                        \"sizeBytes\": 40765017\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/rancher/local-path-provisioner:v0.0.11\"\n                        ],\n                        \"sizeBytes\": 36513375\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\",\n                            \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\n                        ],\n                        \"sizeBytes\": 17444032\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                            
\"docker.io/library/nginx:1.14-alpine\"\n                        ],\n                        \"sizeBytes\": 6978806\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause:3.1\"\n                        ],\n                        \"sizeBytes\": 746479\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796\",\n                            \"docker.io/library/busybox:1.29\"\n                        ],\n                        \"sizeBytes\": 732685\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2\",\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\"\n                        ],\n                        \"sizeBytes\": 599341\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker2\",\n                \"selfLink\": \"/api/v1/nodes/kind-worker2\",\n                \"uid\": \"8d9fe470-04b7-40aa-844e-6b31492ca35f\",\n                \"resourceVersion\": \"3050\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubelet_cleanup\": \"true\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-worker2\",\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"annotations\": {\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": 
\"/run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"10.244.2.0/24\",\n                \"podCIDRs\": [\n                    \"10.244.2.0/24\"\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253696108Ki\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53588700Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253696108Ki\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53588700Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-12-17T09:31:44Z\",\n                        \"lastTransitionTime\": \"2019-12-17T09:29:54Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-12-17T09:31:44Z\",\n                        \"lastTransitionTime\": \"2019-12-17T09:29:54Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": 
\"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-12-17T09:31:44Z\",\n                        \"lastTransitionTime\": \"2019-12-17T09:29:54Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2019-12-17T09:31:44Z\",\n                        \"lastTransitionTime\": \"2019-12-17T09:30:34Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.17.0.4\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"kind-worker2\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"f1e4e997ccb5453db94980b349719e49\",\n                    \"systemUUID\": \"f8c98249-af63-4cf4-b6c9-500ea253cab8\",\n                    \"bootID\": \"57c65ac7-2347-49e9-a3b1-68acb0049308\",\n                    \"kernelVersion\": \"4.14.138+\",\n                    \"osImage\": \"Ubuntu 19.10\",\n                    \"containerRuntimeVersion\": \"containerd://1.3.2\",\n                    \"kubeletVersion\": \"v1.18.0-alpha.0.1812+5ad586f84e16e5\",\n                    \"kubeProxyVersion\": \"v1.18.0-alpha.0.1812+5ad586f84e16e5\",\n                    \"operatingSystem\": 
\"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd:3.4.3-0\"\n                        ],\n                        \"sizeBytes\": 289997247\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.1812_5ad586f84e16e5\"\n                        ],\n                        \"sizeBytes\": 196936859\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.1812_5ad586f84e16e5\"\n                        ],\n                        \"sizeBytes\": 181802148\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\"\n                        ],\n                        \"sizeBytes\": 123709946\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.1812_5ad586f84e16e5\"\n                        ],\n                        \"sizeBytes\": 102616218\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd:0.5.3\"\n                        ],\n                        \"sizeBytes\": 80345874\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/debian-base:v2.0.0\"\n                        ],\n                        \"sizeBytes\": 53884301\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns:1.6.5\"\n                        ],\n                        
\"sizeBytes\": 41705951\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                            \"docker.io/library/httpd:2.4.38-alpine\"\n                        ],\n                        \"sizeBytes\": 40765017\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/rancher/local-path-provisioner:v0.0.11\"\n                        ],\n                        \"sizeBytes\": 36513375\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\",\n                            \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\n                        ],\n                        \"sizeBytes\": 17444032\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                            \"docker.io/library/nginx:1.14-alpine\"\n                        ],\n                        \"sizeBytes\": 6978806\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause:3.1\"\n                        ],\n                        \"sizeBytes\": 746479\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796\",\n                            \"docker.io/library/busybox:1.29\"\n                        ],\n                        \"sizeBytes\": 732685\n                    },\n                    {\n  
                      \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2\",\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\"\n                        ],\n                        \"sizeBytes\": 599341\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/kube-system/events\",\n        \"resourceVersion\": \"3309\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-rdtng.15e11e95d5f2e53c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-rdtng.15e11e95d5f2e53c\",\n                \"uid\": \"0230715f-8326-4c07-9552-fc9170e4eee1\",\n                \"resourceVersion\": \"395\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-rdtng\",\n                \"uid\": \"9ae1c8e7-996e-497f-8a71-dafc8ddf5f21\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"383\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": 
\"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-rdtng.15e11e9aaf06aea3\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-rdtng.15e11e9aaf06aea3\",\n                \"uid\": \"42e7bf15-00f1-48a8-9b6b-cbe09518506e\",\n                \"resourceVersion\": \"497\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-rdtng\",\n                \"uid\": \"9ae1c8e7-996e-497f-8a71-dafc8ddf5f21\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"394\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-rdtng.15e11e9b09a3396a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-rdtng.15e11e9b09a3396a\",\n                \"uid\": \"77096f7f-f481-4ad8-9cc6-39d590afe9fd\",\n                \"resourceVersion\": \"614\",\n                \"creationTimestamp\": \"2019-12-17T09:29:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n   
             \"name\": \"coredns-6955765f44-rdtng\",\n                \"uid\": \"9ae1c8e7-996e-497f-8a71-dafc8ddf5f21\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"491\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:55Z\",\n            \"lastTimestamp\": \"2019-12-17T09:30:14Z\",\n            \"count\": 3,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-rdtng.15e11ea0295ab08e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-rdtng.15e11ea0295ab08e\",\n                \"uid\": \"23cdd3b2-07cc-4797-921e-0ea528b16e00\",\n                \"resourceVersion\": \"627\",\n                \"creationTimestamp\": \"2019-12-17T09:30:17Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-rdtng\",\n                \"uid\": \"9ae1c8e7-996e-497f-8a71-dafc8ddf5f21\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"534\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-6955765f44-rdtng to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:30:17Z\",\n            \"lastTimestamp\": \"2019-12-17T09:30:17Z\",\n            \"count\": 
1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-rdtng.15e11ea04fb1a3a6\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-rdtng.15e11ea04fb1a3a6\",\n                \"uid\": \"804e215b-c20d-40cd-bc7a-0c875c0450c4\",\n                \"resourceVersion\": \"633\",\n                \"creationTimestamp\": \"2019-12-17T09:30:18Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-rdtng\",\n                \"uid\": \"9ae1c8e7-996e-497f-8a71-dafc8ddf5f21\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"624\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/coredns:1.6.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:30:18Z\",\n            \"lastTimestamp\": \"2019-12-17T09:30:18Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-rdtng.15e11ea083bc32d1\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-rdtng.15e11ea083bc32d1\",\n                \"uid\": \"e2fa3d04-4f7a-4cec-9ab2-e78cc41962eb\",\n        
        \"resourceVersion\": \"641\",\n                \"creationTimestamp\": \"2019-12-17T09:30:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-rdtng\",\n                \"uid\": \"9ae1c8e7-996e-497f-8a71-dafc8ddf5f21\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"624\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:30:19Z\",\n            \"lastTimestamp\": \"2019-12-17T09:30:19Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-rdtng.15e11ea09148b03e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-rdtng.15e11ea09148b03e\",\n                \"uid\": \"02956784-fbb2-4c4a-8087-67f330f88d81\",\n                \"resourceVersion\": \"645\",\n                \"creationTimestamp\": \"2019-12-17T09:30:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-rdtng\",\n                \"uid\": \"9ae1c8e7-996e-497f-8a71-dafc8ddf5f21\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"624\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n     
       \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:30:19Z\",\n            \"lastTimestamp\": \"2019-12-17T09:30:19Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-whdtq.15e11e95d4f68f4c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-whdtq.15e11e95d4f68f4c\",\n                \"uid\": \"12f98381-cc6d-4bac-a08e-c12af04a3ab1\",\n                \"resourceVersion\": \"385\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-whdtq\",\n                \"uid\": \"e7838427-55e3-46d6-b59f-da79b16daa1c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"380\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-whdtq.15e11e9aadc0d752\",\n   
             \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-whdtq.15e11e9aadc0d752\",\n                \"uid\": \"9e3ea379-d658-4ab2-8ebe-96bce44838d2\",\n                \"resourceVersion\": \"481\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-whdtq\",\n                \"uid\": \"e7838427-55e3-46d6-b59f-da79b16daa1c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"384\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-whdtq.15e11e9b08f32b9e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-whdtq.15e11e9b08f32b9e\",\n                \"uid\": \"83d51b1d-141d-46d0-867c-5a7a66576335\",\n                \"resourceVersion\": \"613\",\n                \"creationTimestamp\": \"2019-12-17T09:29:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-whdtq\",\n                \"uid\": \"e7838427-55e3-46d6-b59f-da79b16daa1c\",\n             
   \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"484\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:55Z\",\n            \"lastTimestamp\": \"2019-12-17T09:30:14Z\",\n            \"count\": 3,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-whdtq.15e11ea02a0cd067\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-whdtq.15e11ea02a0cd067\",\n                \"uid\": \"f2794bfb-d7be-406c-80e7-923befe47a21\",\n                \"resourceVersion\": \"628\",\n                \"creationTimestamp\": \"2019-12-17T09:30:17Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-whdtq\",\n                \"uid\": \"e7838427-55e3-46d6-b59f-da79b16daa1c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"531\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-6955765f44-whdtq to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:30:17Z\",\n            \"lastTimestamp\": \"2019-12-17T09:30:17Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            
\"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-whdtq.15e11ea053aea82f\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-whdtq.15e11ea053aea82f\",\n                \"uid\": \"ca3e55b7-aaab-4f4f-bd93-1cc7776c5b1a\",\n                \"resourceVersion\": \"635\",\n                \"creationTimestamp\": \"2019-12-17T09:30:18Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-whdtq\",\n                \"uid\": \"e7838427-55e3-46d6-b59f-da79b16daa1c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"625\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/coredns:1.6.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:30:18Z\",\n            \"lastTimestamp\": \"2019-12-17T09:30:18Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-whdtq.15e11ea082bc48fb\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-whdtq.15e11ea082bc48fb\",\n                \"uid\": \"492b4170-dd68-4f22-948f-1b8418abaeba\",\n                \"resourceVersion\": \"640\",\n                \"creationTimestamp\": \"2019-12-17T09:30:19Z\"\n            },\n        
    \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-whdtq\",\n                \"uid\": \"e7838427-55e3-46d6-b59f-da79b16daa1c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"625\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:30:19Z\",\n            \"lastTimestamp\": \"2019-12-17T09:30:19Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-whdtq.15e11ea08cafcfae\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-whdtq.15e11ea08cafcfae\",\n                \"uid\": \"5a7dc6f0-63be-46e7-8f41-909a3a516b86\",\n                \"resourceVersion\": \"643\",\n                \"creationTimestamp\": \"2019-12-17T09:30:19Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-whdtq\",\n                \"uid\": \"e7838427-55e3-46d6-b59f-da79b16daa1c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"625\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n    
            \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:30:19Z\",\n            \"lastTimestamp\": \"2019-12-17T09:30:19Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44.15e11e95d4e22584\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44.15e11e95d4e22584\",\n                \"uid\": \"1cd02a07-5e54-43ed-ad0f-ca0795ef6e52\",\n                \"resourceVersion\": \"386\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44\",\n                \"uid\": \"79d299c6-fcf1-4e08-bc78-d59aea3b0484\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"377\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-6955765f44-whdtq\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44.15e11e95d528b909\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44.15e11e95d528b909\",\n                
\"uid\": \"19241740-f165-4fa8-96e6-13696d2c590a\",\n                \"resourceVersion\": \"393\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44\",\n                \"uid\": \"79d299c6-fcf1-4e08-bc78-d59aea3b0484\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"377\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-6955765f44-rdtng\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.15e11e95d4730b77\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns.15e11e95d4730b77\",\n                \"uid\": \"67689ef8-e02d-449b-9c24-bac0d39e937e\",\n                \"resourceVersion\": \"381\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"886abdbc-ccaa-4f61-90ef-17f38897a9f6\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"180\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-6955765f44 to 2\",\n            \"source\": {\n    
            \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-4gr5t.15e11e9ab6bfb2d6\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-4gr5t.15e11e9ab6bfb2d6\",\n                \"uid\": \"1fc91614-9eb8-4100-8297-e64a39a746ee\",\n                \"resourceVersion\": \"521\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-4gr5t\",\n                \"uid\": \"1044ca91-a15c-431e-a1ed-64eba6ec2cd5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"511\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kindnet-4gr5t to kind-worker2\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-4gr5t.15e11e9ae5e579bc\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-4gr5t.15e11e9ae5e579bc\",\n                \"uid\": 
\"6205b5e6-e448-4680-af57-4c8082957e92\",\n                \"resourceVersion\": \"530\",\n                \"creationTimestamp\": \"2019-12-17T09:29:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-4gr5t\",\n                \"uid\": \"1044ca91-a15c-431e-a1ed-64eba6ec2cd5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"514\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"kindest/kindnetd:0.5.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:55Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-4gr5t.15e11e9b7ec12a9d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-4gr5t.15e11e9b7ec12a9d\",\n                \"uid\": \"f024e755-f334-4eeb-aa9c-eecb1dbe8c6a\",\n                \"resourceVersion\": \"544\",\n                \"creationTimestamp\": \"2019-12-17T09:29:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-4gr5t\",\n                \"uid\": \"1044ca91-a15c-431e-a1ed-64eba6ec2cd5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"514\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n   
         },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:57Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-4gr5t.15e11e9b9981864f\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-4gr5t.15e11e9b9981864f\",\n                \"uid\": \"14118791-72aa-4004-a7bf-b529f6a04b39\",\n                \"resourceVersion\": \"548\",\n                \"creationTimestamp\": \"2019-12-17T09:29:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-4gr5t\",\n                \"uid\": \"1044ca91-a15c-431e-a1ed-64eba6ec2cd5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"514\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:58Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n           
     \"name\": \"kindnet-b98rv.15e11e9ab1673370\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-b98rv.15e11e9ab1673370\",\n                \"uid\": \"9c1214e5-2f30-497f-bab8-b79330ae2ac9\",\n                \"resourceVersion\": \"505\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-b98rv\",\n                \"uid\": \"0a602bf8-0ee7-4332-97f3-9609571dc739\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"487\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kindnet-b98rv to kind-worker\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-b98rv.15e11e9adf047cc2\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-b98rv.15e11e9adf047cc2\",\n                \"uid\": \"554acbb6-13c0-4c2e-a595-33b8320eab32\",\n                \"resourceVersion\": \"528\",\n                \"creationTimestamp\": \"2019-12-17T09:29:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-b98rv\",\n                \"uid\": \"0a602bf8-0ee7-4332-97f3-9609571dc739\",\n                \"apiVersion\": 
\"v1\",\n                \"resourceVersion\": \"503\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"kindest/kindnetd:0.5.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:55Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-b98rv.15e11e9b7e887cf3\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-b98rv.15e11e9b7e887cf3\",\n                \"uid\": \"a8c7db2e-d0a7-439d-8694-6aa5655c0562\",\n                \"resourceVersion\": \"543\",\n                \"creationTimestamp\": \"2019-12-17T09:29:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-b98rv\",\n                \"uid\": \"0a602bf8-0ee7-4332-97f3-9609571dc739\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"503\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:57Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            
\"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-b98rv.15e11e9b9685e17f\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-b98rv.15e11e9b9685e17f\",\n                \"uid\": \"d74a966d-c82f-43ed-bf02-0d28684d1c69\",\n                \"resourceVersion\": \"547\",\n                \"creationTimestamp\": \"2019-12-17T09:29:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-b98rv\",\n                \"uid\": \"0a602bf8-0ee7-4332-97f3-9609571dc739\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"503\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:58Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-fw7lc.15e11e95cbb63ba4\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-fw7lc.15e11e95cbb63ba4\",\n                \"uid\": \"bcbe2246-0262-46e2-9a23-155c64fb4708\",\n                \"resourceVersion\": \"368\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\"\n            },\n            \"involvedObject\": 
{\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-fw7lc\",\n                \"uid\": \"2504d6d5-0b0e-4bec-9572-fc10e3d54e3b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"360\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kindnet-fw7lc to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-fw7lc.15e11e9605c48083\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-fw7lc.15e11e9605c48083\",\n                \"uid\": \"b91430f0-ec50-4b8a-a5ef-1ece3af73298\",\n                \"resourceVersion\": \"423\",\n                \"creationTimestamp\": \"2019-12-17T09:29:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-fw7lc\",\n                \"uid\": \"2504d6d5-0b0e-4bec-9572-fc10e3d54e3b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"362\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"kindest/kindnetd:0.5.3\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            
\"firstTimestamp\": \"2019-12-17T09:29:34Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-fw7lc.15e11e962a2c4b32\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-fw7lc.15e11e962a2c4b32\",\n                \"uid\": \"969b50c5-633b-450e-a70f-9bca8a721546\",\n                \"resourceVersion\": \"425\",\n                \"creationTimestamp\": \"2019-12-17T09:29:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-fw7lc\",\n                \"uid\": \"2504d6d5-0b0e-4bec-9572-fc10e3d54e3b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"362\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:34Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-fw7lc.15e11e96449e2838\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-fw7lc.15e11e96449e2838\",\n                \"uid\": 
\"b90ed768-8c47-46f8-a098-475bc740c58c\",\n                \"resourceVersion\": \"430\",\n                \"creationTimestamp\": \"2019-12-17T09:29:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-fw7lc\",\n                \"uid\": \"2504d6d5-0b0e-4bec-9572-fc10e3d54e3b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"362\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:35Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet.15e11e95ca764a63\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet.15e11e95ca764a63\",\n                \"uid\": \"1ebddeb8-b17a-4b03-b68c-914b597505b5\",\n                \"resourceVersion\": \"364\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet\",\n                \"uid\": \"2a4e41ea-d2ad-4d3f-a2af-f3e983d7ed09\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"238\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: 
kindnet-fw7lc\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet.15e11e9aaf050a9a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet.15e11e9aaf050a9a\",\n                \"uid\": \"471934fb-c236-4272-ae28-8ed027f512f7\",\n                \"resourceVersion\": \"495\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet\",\n                \"uid\": \"2a4e41ea-d2ad-4d3f-a2af-f3e983d7ed09\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"436\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kindnet-b98rv\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet.15e11e9ab646a408\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet.15e11e9ab646a408\",\n                \"uid\": 
\"d3a70a4b-c569-451b-9564-a9d6e2e5c9ba\",\n                \"resourceVersion\": \"516\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet\",\n                \"uid\": \"2a4e41ea-d2ad-4d3f-a2af-f3e983d7ed09\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"494\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kindnet-4gr5t\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.15e11e91bc2c639d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-controller-manager.15e11e91bc2c639d\",\n                \"uid\": \"d66d493c-2a07-4ab6-b1f5-257fda3081f0\",\n                \"resourceVersion\": \"205\",\n                \"creationTimestamp\": \"2019-12-17T09:29:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Endpoints\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"67b810fe-8f5c-4e20-b77d-844590e39054\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"203\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"kind-control-plane_edbbeea2-a172-4368-8b31-816579c4d98d became leader\",\n 
           \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.15e11e91bc2cf84c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-controller-manager.15e11e91bc2cf84c\",\n                \"uid\": \"3c0f1731-d5c1-43a4-9314-4cffd33c06e0\",\n                \"resourceVersion\": \"206\",\n                \"creationTimestamp\": \"2019-12-17T09:29:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"ff880d17-cbea-4681-a974-b9abeae94e2d\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"204\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"kind-control-plane_edbbeea2-a172-4368-8b31-816579c4d98d became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cwrhc.15e11e9ab6e44857\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/kube-proxy-cwrhc.15e11e9ab6e44857\",\n                \"uid\": \"d632cc9e-0377-4e0b-ac85-a15df6710cdb\",\n                \"resourceVersion\": \"523\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-cwrhc\",\n                \"uid\": \"3dc6e5ee-88dd-4ac4-9790-c6145c41e4d1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"512\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-cwrhc to kind-worker2\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cwrhc.15e11e9b0ca54091\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-cwrhc.15e11e9b0ca54091\",\n                \"uid\": \"a1e781f0-72eb-4811-8186-8b0739eab5f8\",\n                \"resourceVersion\": \"538\",\n                \"creationTimestamp\": \"2019-12-17T09:29:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-cwrhc\",\n                \"uid\": \"3dc6e5ee-88dd-4ac4-9790-c6145c41e4d1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"518\",\n                \"fieldPath\": 
\"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:55Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cwrhc.15e11e9ba789fa8c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-cwrhc.15e11e9ba789fa8c\",\n                \"uid\": \"cc0be252-434f-4fb8-9b61-30651fb9a456\",\n                \"resourceVersion\": \"554\",\n                \"creationTimestamp\": \"2019-12-17T09:29:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-cwrhc\",\n                \"uid\": \"3dc6e5ee-88dd-4ac4-9790-c6145c41e4d1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"518\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:58Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cwrhc.15e11e9baf849663\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-cwrhc.15e11e9baf849663\",\n                \"uid\": \"f33ed6e6-ee89-4e8f-95c3-d430ea630be8\",\n                \"resourceVersion\": \"556\",\n                \"creationTimestamp\": \"2019-12-17T09:29:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-cwrhc\",\n                \"uid\": \"3dc6e5ee-88dd-4ac4-9790-c6145c41e4d1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"518\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:58Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-f8mcv.15e11e95cbb76ac3\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-f8mcv.15e11e95cbb76ac3\",\n                \"uid\": \"5eae2b00-65ae-4544-bb7f-10ecd9691984\",\n                \"resourceVersion\": \"371\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\"\n            },\n            \"involvedObject\": {\n                
\"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-f8mcv\",\n                \"uid\": \"12ba89ea-aeda-4523-92f1-63666187b35d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"361\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-f8mcv to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-f8mcv.15e11e95ea96f9f8\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-f8mcv.15e11e95ea96f9f8\",\n                \"uid\": \"458ffde8-1ef5-4621-8781-8811a3b63702\",\n                \"resourceVersion\": \"418\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-f8mcv\",\n                \"uid\": \"12ba89ea-aeda-4523-92f1-63666187b35d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"365\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": 
\"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-f8mcv.15e11e9628a28d9d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-f8mcv.15e11e9628a28d9d\",\n                \"uid\": \"00950f69-03ac-4458-9bd7-569c5e346818\",\n                \"resourceVersion\": \"424\",\n                \"creationTimestamp\": \"2019-12-17T09:29:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-f8mcv\",\n                \"uid\": \"12ba89ea-aeda-4523-92f1-63666187b35d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"365\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:34Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-f8mcv.15e11e963078ef8e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/kube-proxy-f8mcv.15e11e963078ef8e\",\n                \"uid\": \"c5c1d0ec-98b8-4a6a-ac27-180813929192\",\n                \"resourceVersion\": \"426\",\n                \"creationTimestamp\": \"2019-12-17T09:29:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-f8mcv\",\n                \"uid\": \"12ba89ea-aeda-4523-92f1-63666187b35d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"365\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:34Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-h7xw6.15e11e9aafffa49a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-h7xw6.15e11e9aafffa49a\",\n                \"uid\": \"200f0a83-30f3-4311-9fcc-48fd6b96c312\",\n                \"resourceVersion\": \"500\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-h7xw6\",\n                \"uid\": \"2da2a208-3d2e-4faf-9cd0-d29e79d1f9b8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": 
\"486\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-h7xw6 to kind-worker\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-h7xw6.15e11e9ad313cf65\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-h7xw6.15e11e9ad313cf65\",\n                \"uid\": \"65b72ed6-46bb-449c-b57b-727acc2677a2\",\n                \"resourceVersion\": \"527\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-h7xw6\",\n                \"uid\": \"2da2a208-3d2e-4faf-9cd0-d29e79d1f9b8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"496\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n    
        \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-h7xw6.15e11e9b7f09773c\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-h7xw6.15e11e9b7f09773c\",\n                \"uid\": \"56c80ef5-38db-41cf-841d-e6cefa252f93\",\n                \"resourceVersion\": \"545\",\n                \"creationTimestamp\": \"2019-12-17T09:29:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-h7xw6\",\n                \"uid\": \"2da2a208-3d2e-4faf-9cd0-d29e79d1f9b8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"496\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:57Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-h7xw6.15e11e9b8b5736a6\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-h7xw6.15e11e9b8b5736a6\",\n                \"uid\": \"0a1f6f37-0c51-4633-9bb8-a9cfd3cb3834\",\n                \"resourceVersion\": \"546\",\n                \"creationTimestamp\": \"2019-12-17T09:29:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                
\"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-h7xw6\",\n                \"uid\": \"2da2a208-3d2e-4faf-9cd0-d29e79d1f9b8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"496\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:57Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.15e11e95cae6d9bd\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy.15e11e95cae6d9bd\",\n                \"uid\": \"578b322b-18eb-4ac5-9e67-147113a51ddc\",\n                \"resourceVersion\": \"369\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"4081bf5a-c468-4624-986f-141f7682e044\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"186\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-f8mcv\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"count\": 
1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.15e11e9aaefc18bf\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy.15e11e9aaefc18bf\",\n                \"uid\": \"998d8e49-162a-4ee1-b169-553cc855f6b5\",\n                \"resourceVersion\": \"493\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"4081bf5a-c468-4624-986f-141f7682e044\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"429\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-h7xw6\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.15e11e9ab668fdc0\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy.15e11e9ab668fdc0\",\n                \"uid\": \"f5586c58-0dea-4dd2-8034-c0e69cb96871\",\n                \"resourceVersion\": \"520\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": 
\"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"4081bf5a-c468-4624-986f-141f7682e044\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"498\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-cwrhc\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.15e11e913fe84388\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-scheduler.15e11e913fe84388\",\n                \"uid\": \"4bd4b53c-1ad6-4e0d-96d4-263d1e54c487\",\n                \"resourceVersion\": \"161\",\n                \"creationTimestamp\": \"2019-12-17T09:29:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Endpoints\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"1bf99551-e5ca-40ea-83ca-50b868423867\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"158\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"kind-control-plane_bbe25029-e924-4956-a4f4-c8f740e2126d became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:13Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:13Z\",\n            \"count\": 1,\n            
\"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.15e11e913fe86e70\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-scheduler.15e11e913fe86e70\",\n                \"uid\": \"369caf00-09d5-4a6f-a38f-4d6e79693197\",\n                \"resourceVersion\": \"160\",\n                \"creationTimestamp\": \"2019-12-17T09:29:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"f7e80851-53bf-4ff7-9786-1b014608842f\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"159\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"kind-control-plane_bbe25029-e924-4956-a4f4-c8f740e2126d became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:13Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/kube-system/replicationcontrollers\",\n        \"resourceVersion\": \"3309\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/kube-system/services\",\n        \"resourceVersion\": \"3309\"\n    },\n    \"items\": [\n   
     {\n            \"metadata\": {\n                \"name\": \"kube-dns\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/services/kube-dns\",\n                \"uid\": \"1d94dd72-8a2f-49d9-8be7-a2bb806a6a98\",\n                \"resourceVersion\": \"182\",\n                \"creationTimestamp\": \"2019-12-17T09:29:14Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"KubeDNS\"\n                },\n                \"annotations\": {\n                    \"prometheus.io/port\": \"9153\",\n                    \"prometheus.io/scrape\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"ports\": [\n                    {\n                        \"name\": \"dns\",\n                        \"protocol\": \"UDP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"dns-tcp\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"metrics\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 9153,\n                        \"targetPort\": 9153\n                    }\n                ],\n                \"selector\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"clusterIP\": \"10.96.0.10\",\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\"\n            },\n            \"status\": {\n                \"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n     
   \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets\",\n        \"resourceVersion\": \"3309\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kindnet\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet\",\n                \"uid\": \"2a4e41ea-d2ad-4d3f-a2af-f3e983d7ed09\",\n                \"resourceVersion\": \"562\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2019-12-17T09:29:16Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"k8s-app\": \"kindnet\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"app\": \"kindnet\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"app\": \"kindnet\",\n                            \"k8s-app\": \"kindnet\",\n                            \"tier\": \"node\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/cni/net.d\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"hostPath\": {\n          
                          \"path\": \"/run/xtables.lock\",\n                                    \"type\": \"FileOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"hostPath\": {\n                                    \"path\": \"/lib/modules\",\n                                    \"type\": \"\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kindnet-cni\",\n                                \"image\": \"kindest/kindnetd:0.5.3\",\n                                \"env\": [\n                                    {\n                                        \"name\": \"HOST_IP\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"status.hostIP\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"POD_IP\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"status.podIP\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"POD_SUBNET\",\n                                        \"value\": \"10.244.0.0/16\"\n                    
                }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"50Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"cni-cfg\",\n                                        \"mountPath\": \"/etc/cni/net.d\"\n                                    },\n                                    {\n                                        \"name\": \"xtables-lock\",\n                                        \"mountPath\": \"/run/xtables.lock\"\n                                    },\n                                    {\n                                        \"name\": \"lib-modules\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/lib/modules\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_RAW\",\n                                            \"NET_ADMIN\"\n                                        ]\n                                    },\n                      
              \"privileged\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"kindnet\",\n                        \"serviceAccount\": \"kindnet\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\",\n                                \"effect\": \"NoSchedule\"\n                            }\n                        ]\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1\n                    }\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 3,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 3,\n                \"numberReady\": 3,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 3,\n                \"numberAvailable\": 3\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy\",\n                \"uid\": \"4081bf5a-c468-4624-986f-141f7682e044\",\n                \"resourceVersion\": \"564\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2019-12-17T09:29:15Z\",\n                
\"labels\": {\n                    \"k8s-app\": \"kube-proxy\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-proxy\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-proxy\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"configMap\": {\n                                    \"name\": \"kube-proxy\",\n                                    \"defaultMode\": 420\n                                }\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"hostPath\": {\n                                    \"path\": \"/run/xtables.lock\",\n                                    \"type\": \"FileOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"hostPath\": {\n                                    \"path\": \"/lib/modules\",\n                                    \"type\": \"\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"image\": 
\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\",\n                                \"command\": [\n                                    \"/usr/local/bin/kube-proxy\",\n                                    \"--config=/var/lib/kube-proxy/config.conf\",\n                                    \"--hostname-override=$(NODE_NAME)\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"NODE_NAME\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"spec.nodeName\"\n                                            }\n                                        }\n                                    }\n                                ],\n                                \"resources\": {},\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"kube-proxy\",\n                                        \"mountPath\": \"/var/lib/kube-proxy\"\n                                    },\n                                    {\n                                        \"name\": \"xtables-lock\",\n                                        \"mountPath\": \"/run/xtables.lock\"\n                                    },\n                                    {\n                                        \"name\": \"lib-modules\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/lib/modules\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n      
                          \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"privileged\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"kube-proxy\",\n                        \"serviceAccount\": \"kube-proxy\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1\n                    }\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 3,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 3,\n                \"numberReady\": 3,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 3,\n                \"numberAvailable\": 3\n    
        }\n        }\n    ]\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments\",\n        \"resourceVersion\": \"3309\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/coredns\",\n                \"uid\": \"886abdbc-ccaa-4f61-90ef-17f38897a9f6\",\n                \"resourceVersion\": \"675\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2019-12-17T09:29:14Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                
                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                
    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                   
         \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"effect\": \"NoSchedule\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": \"25%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n           
 },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 2,\n                \"updatedReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2019-12-17T09:30:28Z\",\n                        \"lastTransitionTime\": \"2019-12-17T09:30:28Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2019-12-17T09:30:28Z\",\n                        \"lastTransitionTime\": \"2019-12-17T09:29:33Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-6955765f44\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets\",\n        \"resourceVersion\": \"3309\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/coredns-6955765f44\",\n                \"uid\": \"79d299c6-fcf1-4e08-bc78-d59aea3b0484\",\n                \"resourceVersion\": \"674\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n 
                   \"pod-template-hash\": \"6955765f44\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"2\",\n                    \"deployment.kubernetes.io/max-replicas\": \"3\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns\",\n                        \"uid\": \"886abdbc-ccaa-4f61-90ef-17f38897a9f6\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\",\n                        \"pod-template-hash\": \"6955765f44\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\",\n                            \"pod-template-hash\": \"6955765f44\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                            
        ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n     
                                   \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            
\"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"effect\": \"NoSchedule\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 2,\n                \"fullyLabeledReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"observedGeneration\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        
\"selfLink\": \"/api/v1/namespaces/kube-system/pods\",\n        \"resourceVersion\": \"3311\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-rdtng\",\n                \"generateName\": \"coredns-6955765f44-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/coredns-6955765f44-rdtng\",\n                \"uid\": \"9ae1c8e7-996e-497f-8a71-dafc8ddf5f21\",\n                \"resourceVersion\": \"662\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"6955765f44\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-6955765f44\",\n                        \"uid\": \"79d299c6-fcf1-4e08-bc78-d59aea3b0484\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"coredns-token-crr5q\",\n                        \"secret\": {\n                            \"secretName\": 
\"coredns-token-crr5q\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": 
\"coredns-token-crr5q\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n   
                         \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"kind-control-plane\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        
\"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:30:17Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:30:25Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:30:25Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:30:17Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.3\",\n                \"podIP\": \"10.244.0.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"10.244.0.2\"\n                    }\n                ],\n                \"startTime\": \"2019-12-17T09:30:17Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-12-17T09:30:19Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                        \"imageID\": \"sha256:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61\",\n                        \"containerID\": 
\"containerd://4977d74df5dd1bb4244b10fa620e0899a2602d111f5889fc5febbe34776c4039\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-whdtq\",\n                \"generateName\": \"coredns-6955765f44-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/coredns-6955765f44-whdtq\",\n                \"uid\": \"e7838427-55e3-46d6-b59f-da79b16daa1c\",\n                \"resourceVersion\": \"669\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"6955765f44\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-6955765f44\",\n                        \"uid\": \"79d299c6-fcf1-4e08-bc78-d59aea3b0484\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": 
\"coredns-token-crr5q\",\n                        \"secret\": {\n                            \"secretName\": \"coredns-token-crr5q\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                      
      },\n                            {\n                                \"name\": \"coredns-token-crr5q\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n               
             },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"kind-control-plane\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": 
\"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:30:17Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:30:23Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:30:23Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:30:17Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.3\",\n                \"podIP\": \"10.244.0.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"10.244.0.4\"\n                    }\n                ],\n                \"startTime\": \"2019-12-17T09:30:17Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-12-17T09:30:19Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                        \"imageID\": \"sha256:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61\",\n                        
\"containerID\": \"containerd://7580ecccf435ce927094c2902792af16898c0f6bece0b8c74431439d62bdc2f0\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/etcd-kind-control-plane\",\n                \"uid\": \"4b58f6f4-e4b3-4aa1-a13c-5a6c2d342264\",\n                \"resourceVersion\": \"250\",\n                \"creationTimestamp\": \"2019-12-17T09:29:15Z\",\n                \"labels\": {\n                    \"component\": \"etcd\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"28ba3ba0264772641c791ff01a5eecff\",\n                    \"kubernetes.io/config.mirror\": \"28ba3ba0264772641c791ff01a5eecff\",\n                    \"kubernetes.io/config.seen\": \"2019-12-17T09:29:15.099408846Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"02aad5bd-a337-4316-8440-c7d9935250c5\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"etcd-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n            
            \"name\": \"etcd-data\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/etcd\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd\",\n                        \"image\": \"k8s.gcr.io/etcd:3.4.3-0\",\n                        \"command\": [\n                            \"etcd\",\n                            \"--advertise-client-urls=https://172.17.0.3:2379\",\n                            \"--cert-file=/etc/kubernetes/pki/etcd/server.crt\",\n                            \"--client-cert-auth=true\",\n                            \"--data-dir=/var/lib/etcd\",\n                            \"--initial-advertise-peer-urls=https://172.17.0.3:2380\",\n                            \"--initial-cluster=kind-control-plane=https://172.17.0.3:2380\",\n                            \"--key-file=/etc/kubernetes/pki/etcd/server.key\",\n                            \"--listen-client-urls=https://127.0.0.1:2379,https://172.17.0.3:2379\",\n                            \"--listen-metrics-urls=http://127.0.0.1:2381\",\n                            \"--listen-peer-urls=https://172.17.0.3:2380\",\n                            \"--name=kind-control-plane\",\n                            \"--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt\",\n                            \"--peer-client-cert-auth=true\",\n                            \"--peer-key-file=/etc/kubernetes/pki/etcd/peer.key\",\n                            \"--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\",\n                            \"--snapshot-count=10000\",\n                            \"--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n              
                  \"name\": \"etcd-data\",\n                                \"mountPath\": \"/var/lib/etcd\"\n                            },\n                            {\n                                \"name\": \"etcd-certs\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 2381,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            
\"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.3\",\n                \"podIP\": \"172.17.0.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.3\"\n                    }\n                ],\n                \"startTime\": \"2019-12-17T09:29:15Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-12-17T09:29:06Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": 
\"k8s.gcr.io/etcd:3.4.3-0\",\n                        \"imageID\": \"sha256:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f\",\n                        \"containerID\": \"containerd://6ef25bcf2d2b1f9b8bff5eb4b7746728b909f51b4a9f926f798343efc76fe6c0\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-4gr5t\",\n                \"generateName\": \"kindnet-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-4gr5t\",\n                \"uid\": \"1044ca91-a15c-431e-a1ed-64eba6ec2cd5\",\n                \"resourceVersion\": \"561\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"controller-revision-hash\": \"6f48886b45\",\n                    \"k8s-app\": \"kindnet\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kindnet\",\n                        \"uid\": \"2a4e41ea-d2ad-4d3f-a2af-f3e983d7ed09\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cni-cfg\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                   
     \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kindnet-token-vxrq9\",\n                        \"secret\": {\n                            \"secretName\": \"kindnet-token-vxrq9\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"image\": \"kindest/kindnetd:0.5.3\",\n                        \"env\": [\n                            {\n                                \"name\": \"HOST_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.hostIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.podIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_SUBNET\",\n        
                        \"value\": \"10.244.0.0/16\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kindnet-token-vxrq9\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_RAW\",\n                                    \"NET_ADMIN\"\n                             
   ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"kindnet\",\n                \"serviceAccount\": \"kindnet\",\n                \"nodeName\": \"kind-worker2\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-worker2\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        
\"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priority\": 0,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:54Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:58Z\"\n                    },\n                    {\n      
                  \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:58Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:54Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.4\",\n                \"podIP\": \"172.17.0.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.4\"\n                    }\n                ],\n                \"startTime\": \"2019-12-17T09:29:54Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-12-17T09:29:58Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/kindest/kindnetd:0.5.3\",\n                        \"imageID\": \"sha256:aa67fec7d7ef71445da9a84e9bc88afca2538e9a0aebcba6ef9509b7cf313d17\",\n                        \"containerID\": \"containerd://4d880185d2980cb57e37c62d7bf5f2d4e709103dde97b78735f95b980b5715d3\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Guaranteed\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-b98rv\",\n                \"generateName\": \"kindnet-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/pods/kindnet-b98rv\",\n                \"uid\": \"0a602bf8-0ee7-4332-97f3-9609571dc739\",\n                \"resourceVersion\": \"557\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"controller-revision-hash\": \"6f48886b45\",\n                    \"k8s-app\": \"kindnet\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kindnet\",\n                        \"uid\": \"2a4e41ea-d2ad-4d3f-a2af-f3e983d7ed09\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cni-cfg\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kindnet-token-vxrq9\",\n                        \"secret\": {\n         
                   \"secretName\": \"kindnet-token-vxrq9\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"image\": \"kindest/kindnetd:0.5.3\",\n                        \"env\": [\n                            {\n                                \"name\": \"HOST_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.hostIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.podIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_SUBNET\",\n                                \"value\": \"10.244.0.0/16\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                     
           \"name\": \"cni-cfg\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kindnet-token-vxrq9\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_RAW\",\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"kindnet\",\n                \"serviceAccount\": \"kindnet\",\n                \"nodeName\": \"kind-worker\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                      
  \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-worker\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n     
                   \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priority\": 0,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:54Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:58Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:58Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:54Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.2\",\n                \"podIP\": \"172.17.0.2\",\n                
\"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.2\"\n                    }\n                ],\n                \"startTime\": \"2019-12-17T09:29:54Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-12-17T09:29:58Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/kindest/kindnetd:0.5.3\",\n                        \"imageID\": \"sha256:aa67fec7d7ef71445da9a84e9bc88afca2538e9a0aebcba6ef9509b7cf313d17\",\n                        \"containerID\": \"containerd://9a4f55d5014c59f4f921b4c9fd76a179f94adfd27e2924c52049b93eb421f4fe\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Guaranteed\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-fw7lc\",\n                \"generateName\": \"kindnet-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-fw7lc\",\n                \"uid\": \"2504d6d5-0b0e-4bec-9572-fc10e3d54e3b\",\n                \"resourceVersion\": \"435\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"controller-revision-hash\": \"6f48886b45\",\n                    \"k8s-app\": \"kindnet\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n    
                    \"kind\": \"DaemonSet\",\n                        \"name\": \"kindnet\",\n                        \"uid\": \"2a4e41ea-d2ad-4d3f-a2af-f3e983d7ed09\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cni-cfg\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kindnet-token-vxrq9\",\n                        \"secret\": {\n                            \"secretName\": \"kindnet-token-vxrq9\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"image\": \"kindest/kindnetd:0.5.3\",\n                        \"env\": [\n                            {\n                                \"name\": \"HOST_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                 
                       \"fieldPath\": \"status.hostIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.podIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_SUBNET\",\n                                \"value\": \"10.244.0.0/16\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": 
\"kindnet-token-vxrq9\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_RAW\",\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"kindnet\",\n                \"serviceAccount\": \"kindnet\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-control-plane\"\n                                            ]\n                                        }\n                             
       ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priority\": 0,\n   
             \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:33Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:36Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:36Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:33Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.3\",\n                \"podIP\": \"172.17.0.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.3\"\n                    }\n                ],\n                \"startTime\": \"2019-12-17T09:29:33Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-12-17T09:29:35Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        
\"restartCount\": 0,\n                        \"image\": \"docker.io/kindest/kindnetd:0.5.3\",\n                        \"imageID\": \"sha256:aa67fec7d7ef71445da9a84e9bc88afca2538e9a0aebcba6ef9509b7cf313d17\",\n                        \"containerID\": \"containerd://bd1a6c1aa6a3ffc2ea26d657cc96d339ba44fd0d556471585a56903045c9c445\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Guaranteed\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane\",\n                \"uid\": \"3cfb73fc-a021-4820-b6af-75eda5d5acb6\",\n                \"resourceVersion\": \"209\",\n                \"creationTimestamp\": \"2019-12-17T09:29:15Z\",\n                \"labels\": {\n                    \"component\": \"kube-apiserver\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"f54721675ae6668e1811eee66af67fe9\",\n                    \"kubernetes.io/config.mirror\": \"f54721675ae6668e1811eee66af67fe9\",\n                    \"kubernetes.io/config.seen\": \"2019-12-17T09:29:15.099416923Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"02aad5bd-a337-4316-8440-c7d9935250c5\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": 
\"ca-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl/certs\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"k8s-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-local-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"image\": \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.1812_5ad586f84e16e5\",\n                        \"command\": [\n                            \"kube-apiserver\",\n                            \"--advertise-address=172.17.0.3\",\n                            \"--allow-privileged=true\",\n                            \"--authorization-mode=Node,RBAC\",\n                            
\"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--enable-admission-plugins=NodeRestriction\",\n                            \"--enable-bootstrap-token-auth=true\",\n                            \"--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt\",\n                            \"--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt\",\n                            \"--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key\",\n                            \"--etcd-servers=https://127.0.0.1:2379\",\n                            \"--insecure-port=0\",\n                            \"--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt\",\n                            \"--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key\",\n                            \"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname\",\n                            \"--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt\",\n                            \"--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key\",\n                            \"--requestheader-allowed-names=front-proxy-client\",\n                            \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n                            \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n                            \"--requestheader-group-headers=X-Remote-Group\",\n                            \"--requestheader-username-headers=X-Remote-User\",\n                            \"--secure-port=6443\",\n                            \"--service-account-key-file=/etc/kubernetes/pki/sa.pub\",\n                            \"--service-cluster-ip-range=10.96.0.0/12\",\n                            \"--tls-cert-file=/etc/kubernetes/pki/apiserver.crt\",\n                            \"--tls-private-key-file=/etc/kubernetes/pki/apiserver.key\"\n                        ],\n                        \"resources\": 
{\n                            \"requests\": {\n                                \"cpu\": \"250m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"ca-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"etc-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"k8s-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/pki\"\n                            },\n                            {\n                                \"name\": \"usr-local-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/share/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"usr-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ca-certificates\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 6443,\n                                \"host\": \"172.17.0.3\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                        
    \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        
\"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.3\",\n                \"podIP\": \"172.17.0.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.3\"\n                    }\n                ],\n                \"startTime\": \"2019-12-17T09:29:15Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-12-17T09:29:06Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.1812_5ad586f84e16e5\",\n                        \"imageID\": \"sha256:5f508173f5b4c78d1378836522a12e1721542a7f2f91d71c0e868d9282dae2b0\",\n                        \"containerID\": \"containerd://42749adb4733a818bfc77aa96ff80908a3c3cec1af89b788591e8ad3beb4779f\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-controller-manager-kind-control-plane\",\n                \"uid\": \"9c181648-766c-4c84-95b0-d5fd27701a23\",\n                
\"resourceVersion\": \"231\",\n                \"creationTimestamp\": \"2019-12-17T09:29:15Z\",\n                \"labels\": {\n                    \"component\": \"kube-controller-manager\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"51710287d419c3ab94d3868ead164180\",\n                    \"kubernetes.io/config.mirror\": \"51710287d419c3ab94d3868ead164180\",\n                    \"kubernetes.io/config.seen\": \"2019-12-17T09:29:15.099420121Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"02aad5bd-a337-4316-8440-c7d9935250c5\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"ca-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl/certs\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"flexvolume-dir\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n        
            },\n                    {\n                        \"name\": \"k8s-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/controller-manager.conf\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-local-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"image\": \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.1812_5ad586f84e16e5\",\n                        \"command\": [\n                            \"kube-controller-manager\",\n                            \"--allocate-node-cidrs=true\",\n                            \"--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n                            \"--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n                            \"--bind-address=127.0.0.1\",\n                            
\"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--cluster-cidr=10.244.0.0/16\",\n                            \"--cluster-name=kind\",\n                            \"--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--cluster-signing-key-file=/etc/kubernetes/pki/ca.key\",\n                            \"--controllers=*,bootstrapsigner,tokencleaner\",\n                            \"--enable-hostpath-provisioner=true\",\n                            \"--kubeconfig=/etc/kubernetes/controller-manager.conf\",\n                            \"--leader-elect=true\",\n                            \"--node-cidr-mask-size=24\",\n                            \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n                            \"--root-ca-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--service-account-private-key-file=/etc/kubernetes/pki/sa.key\",\n                            \"--service-cluster-ip-range=10.96.0.0/12\",\n                            \"--use-service-account-credentials=true\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"200m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"ca-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"etc-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"flexvolume-dir\",\n                         
       \"mountPath\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\"\n                            },\n                            {\n                                \"name\": \"k8s-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/pki\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/controller-manager.conf\"\n                            },\n                            {\n                                \"name\": \"usr-local-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/share/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"usr-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ca-certificates\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10257,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n               
         \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    
}\n                ],\n                \"hostIP\": \"172.17.0.3\",\n                \"podIP\": \"172.17.0.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.3\"\n                    }\n                ],\n                \"startTime\": \"2019-12-17T09:29:15Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-12-17T09:29:06Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.1812_5ad586f84e16e5\",\n                        \"imageID\": \"sha256:c46b80f6a0bff35362d173c5c10c02037955c824c6802054dfdd00173ead5ea2\",\n                        \"containerID\": \"containerd://8f517f38f907ec7b929c55fd59504cc80b7f3646fc016aae7508d27122a9759d\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cwrhc\",\n                \"generateName\": \"kube-proxy-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-cwrhc\",\n                \"uid\": \"3dc6e5ee-88dd-4ac4-9790-c6145c41e4d1\",\n                \"resourceVersion\": \"563\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"598f7cc7cd\",\n                    \"k8s-app\": \"kube-proxy\",\n                    \"pod-template-generation\": \"1\"\n                },\n                
\"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-proxy\",\n                        \"uid\": \"4081bf5a-c468-4624-986f-141f7682e044\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"configMap\": {\n                            \"name\": \"kube-proxy\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-proxy-token-knnnr\",\n                        \"secret\": {\n                            \"secretName\": \"kube-proxy-token-knnnr\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\",\n                            
\"--config=/var/lib/kube-proxy/config.conf\",\n                            \"--hostname-override=$(NODE_NAME)\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"mountPath\": \"/var/lib/kube-proxy\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-proxy-token-knnnr\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                       
 }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"kube-proxy\",\n                \"serviceAccount\": \"kube-proxy\",\n                \"nodeName\": \"kind-worker2\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-worker2\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n        
            {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:54Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": 
\"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:58Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:58Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:54Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.4\",\n                \"podIP\": \"172.17.0.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.4\"\n                    }\n                ],\n                \"startTime\": \"2019-12-17T09:29:54Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-12-17T09:29:58Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\",\n                        \"imageID\": \"sha256:3840990bef61eeddec49b1bb488f23aab61629937349b8d0dd3bb567cf721010\",\n                        \"containerID\": \"containerd://5ec3a240658a8c49ca96aa82bc84ab5ea4298fa66cab09f3319d6571da7e96cb\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            
\"metadata\": {\n                \"name\": \"kube-proxy-f8mcv\",\n                \"generateName\": \"kube-proxy-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-f8mcv\",\n                \"uid\": \"12ba89ea-aeda-4523-92f1-63666187b35d\",\n                \"resourceVersion\": \"428\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"598f7cc7cd\",\n                    \"k8s-app\": \"kube-proxy\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-proxy\",\n                        \"uid\": \"4081bf5a-c468-4624-986f-141f7682e044\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"configMap\": {\n                            \"name\": \"kube-proxy\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n   
                 {\n                        \"name\": \"kube-proxy-token-knnnr\",\n                        \"secret\": {\n                            \"secretName\": \"kube-proxy-token-knnnr\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\",\n                            \"--config=/var/lib/kube-proxy/config.conf\",\n                            \"--hostname-override=$(NODE_NAME)\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"mountPath\": \"/var/lib/kube-proxy\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                      
      },\n                            {\n                                \"name\": \"kube-proxy-token-knnnr\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"kube-proxy\",\n                \"serviceAccount\": \"kube-proxy\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-control-plane\"\n                                            ]\n                                        }\n                                    ]\n                                }\n          
                  ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n          
      ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:33Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:35Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:35Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:33Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.3\",\n                \"podIP\": \"172.17.0.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.3\"\n                    }\n                ],\n                \"startTime\": \"2019-12-17T09:29:33Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-12-17T09:29:34Z\"\n                            }\n                        
},\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\",\n                        \"imageID\": \"sha256:3840990bef61eeddec49b1bb488f23aab61629937349b8d0dd3bb567cf721010\",\n                        \"containerID\": \"containerd://c6ebef2cb8987f468144ea5e602b0d8557d999770512ef6f29b6dbd717644a02\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-h7xw6\",\n                \"generateName\": \"kube-proxy-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-h7xw6\",\n                \"uid\": \"2da2a208-3d2e-4faf-9cd0-d29e79d1f9b8\",\n                \"resourceVersion\": \"559\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"598f7cc7cd\",\n                    \"k8s-app\": \"kube-proxy\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-proxy\",\n                        \"uid\": \"4081bf5a-c468-4624-986f-141f7682e044\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"configMap\": {\n                            \"name\": \"kube-proxy\",\n 
                           \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-proxy-token-knnnr\",\n                        \"secret\": {\n                            \"secretName\": \"kube-proxy-token-knnnr\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\",\n                            \"--config=/var/lib/kube-proxy/config.conf\",\n                            \"--hostname-override=$(NODE_NAME)\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {},\n       
                 \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"mountPath\": \"/var/lib/kube-proxy\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-proxy-token-knnnr\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"kube-proxy\",\n                \"serviceAccount\": \"kube-proxy\",\n                \"nodeName\": \"kind-worker\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        
\"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-worker\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    
},\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:54Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:58Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:58Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": 
\"2019-12-17T09:29:54Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.2\",\n                \"podIP\": \"172.17.0.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.2\"\n                    }\n                ],\n                \"startTime\": \"2019-12-17T09:29:54Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-12-17T09:29:57Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\",\n                        \"imageID\": \"sha256:3840990bef61eeddec49b1bb488f23aab61629937349b8d0dd3bb567cf721010\",\n                        \"containerID\": \"containerd://c104a1f392b62e82f657abd4f7f4a41d3aa4b2bcd0f77faf0aac3b6d2032e20c\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-scheduler-kind-control-plane\",\n                \"uid\": \"892df874-27f4-48c3-bca4-a586e8919ba2\",\n                \"resourceVersion\": \"244\",\n                \"creationTimestamp\": \"2019-12-17T09:29:15Z\",\n                \"labels\": {\n                    \"component\": \"kube-scheduler\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    
\"kubernetes.io/config.hash\": \"46596e0b0a1807dc2eb1f0db6ea8c704\",\n                    \"kubernetes.io/config.mirror\": \"46596e0b0a1807dc2eb1f0db6ea8c704\",\n                    \"kubernetes.io/config.seen\": \"2019-12-17T09:29:15.099422339Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"02aad5bd-a337-4316-8440-c7d9935250c5\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/scheduler.conf\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"image\": \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.1812_5ad586f84e16e5\",\n                        \"command\": [\n                            \"kube-scheduler\",\n                            \"--authentication-kubeconfig=/etc/kubernetes/scheduler.conf\",\n                            \"--authorization-kubeconfig=/etc/kubernetes/scheduler.conf\",\n                            \"--bind-address=127.0.0.1\",\n                            \"--kubeconfig=/etc/kubernetes/scheduler.conf\",\n                            \"--leader-elect=true\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                     
   },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/scheduler.conf\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10259,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                
\"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-12-17T09:29:15Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.3\",\n                \"podIP\": \"172.17.0.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.3\"\n                    }\n                ],\n                \"startTime\": \"2019-12-17T09:29:15Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-12-17T09:29:06Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": 
\"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.1812_5ad586f84e16e5\",\n                        \"imageID\": \"sha256:a5b2693af75c68ef6a36029afa43b59571621be9dcb925e2f79b2d911f7b673d\",\n                        \"containerID\": \"containerd://42420a863cb275995c302ebd9c716e29e199e677a062f0c74b05ea509f91a442\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        }\n    ]\n}\n==== START logs for container coredns of pod kube-system/coredns-6955765f44-rdtng ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7\nCoreDNS-1.6.5\nlinux/amd64, go1.13.4, c2fd1b2\n==== END logs for container coredns of pod kube-system/coredns-6955765f44-rdtng ====\n==== START logs for container coredns of pod kube-system/coredns-6955765f44-whdtq ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7\nCoreDNS-1.6.5\nlinux/amd64, go1.13.4, c2fd1b2\n==== END logs for container coredns of pod kube-system/coredns-6955765f44-whdtq ====\n==== START logs for container etcd of pod kube-system/etcd-kind-control-plane ====\n[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead\n2019-12-17 09:29:06.495853 I | etcdmain: etcd Version: 3.4.3\n2019-12-17 09:29:06.495920 I | etcdmain: Git SHA: 3cf2f69b5\n2019-12-17 09:29:06.495924 I | etcdmain: Go Version: go1.12.12\n2019-12-17 09:29:06.495927 I | etcdmain: Go OS/Arch: linux/amd64\n2019-12-17 09:29:06.495932 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8\n[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead\n2019-12-17 09:29:06.496588 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = \n2019-12-17 09:29:06.497645 I | embed: name = 
kind-control-plane\n2019-12-17 09:29:06.497775 I | embed: data dir = /var/lib/etcd\n2019-12-17 09:29:06.497783 I | embed: member dir = /var/lib/etcd/member\n2019-12-17 09:29:06.497794 I | embed: heartbeat = 100ms\n2019-12-17 09:29:06.497799 I | embed: election = 1000ms\n2019-12-17 09:29:06.497804 I | embed: snapshot count = 10000\n2019-12-17 09:29:06.497816 I | embed: advertise client URLs = https://172.17.0.3:2379\n2019-12-17 09:29:06.508512 I | etcdserver: starting member b273bc7741bcb020 in cluster 86482fea2286a1d2\nraft2019/12/17 09:29:06 INFO: b273bc7741bcb020 switched to configuration voters=()\nraft2019/12/17 09:29:06 INFO: b273bc7741bcb020 became follower at term 0\nraft2019/12/17 09:29:06 INFO: newRaft b273bc7741bcb020 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]\nraft2019/12/17 09:29:06 INFO: b273bc7741bcb020 became follower at term 1\nraft2019/12/17 09:29:06 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)\n2019-12-17 09:29:06.518738 W | auth: simple token is not cryptographically signed\n2019-12-17 09:29:06.524214 I | etcdserver: starting server... 
[version: 3.4.3, cluster version: to_be_decided]\n2019-12-17 09:29:06.529765 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = \n2019-12-17 09:29:06.530107 I | embed: listening for metrics on http://127.0.0.1:2381\n2019-12-17 09:29:06.530175 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)\n2019-12-17 09:29:06.530253 I | embed: listening for peers on 172.17.0.3:2380\nraft2019/12/17 09:29:06 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)\n2019-12-17 09:29:06.531031 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2\nraft2019/12/17 09:29:07 INFO: b273bc7741bcb020 is starting a new election at term 1\nraft2019/12/17 09:29:07 INFO: b273bc7741bcb020 became candidate at term 2\nraft2019/12/17 09:29:07 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 2\nraft2019/12/17 09:29:07 INFO: b273bc7741bcb020 became leader at term 2\nraft2019/12/17 09:29:07 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 2\n2019-12-17 09:29:07.214433 I | etcdserver: published {Name:kind-control-plane ClientURLs:[https://172.17.0.3:2379]} to cluster 86482fea2286a1d2\n2019-12-17 09:29:07.215019 I | etcdserver: setting up the initial cluster version to 3.4\n2019-12-17 09:29:07.216027 I | embed: ready to serve client requests\n2019-12-17 09:29:07.216666 N | etcdserver/membership: set the initial cluster version to 3.4\n2019-12-17 09:29:07.216778 I | etcdserver/api: enabled capabilities for version 3.4\n2019-12-17 09:29:07.217398 I | embed: ready to serve client requests\n2019-12-17 09:29:07.222400 I | embed: serving client requests on 127.0.0.1:2379\n2019-12-17 09:29:07.225944 I | embed: serving client requests on 172.17.0.3:2379\n2019-12-17 09:29:56.608907 W | etcdserver: 
read-only range request \"key:\\\"/registry/minions/\\\" range_end:\\\"/registry/minions0\\\" limit:500 \" with result \"range_response_count:3 size:6781\" took too long (106.857372ms) to execute\n2019-12-17 09:30:21.359531 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:309\" took too long (140.194518ms) to execute\n2019-12-17 09:30:22.575293 W | etcdserver: read-only range request \"key:\\\"/registry/pods/local-path-storage/local-path-provisioner-7745554f7f-jktcl\\\" \" with result \"range_response_count:1 size:1865\" took too long (843.073989ms) to execute\n2019-12-17 09:30:22.723018 W | etcdserver: request \"header:<ID:12691265878139655574 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/leases/kube-node-lease/kind-worker\\\" mod_revision:595 > success:<request_put:<key:\\\"/registry/leases/kube-node-lease/kind-worker\\\" value_size:233 >> failure:<request_range:<key:\\\"/registry/leases/kube-node-lease/kind-worker\\\" > >>\" with result \"size:16\" took too long (211.422006ms) to execute\n2019-12-17 09:30:22.794308 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:309\" took too long (240.87989ms) to execute\n2019-12-17 09:30:23.530732 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:291\" took too long (476.991804ms) to execute\n2019-12-17 09:30:23.610526 W | etcdserver: request \"header:<ID:12691265878139655580 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" mod_revision:639 > success:<request_put:<key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" value_size:234 >> 
failure:<request_range:<key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" > >>\" with result \"size:16\" took too long (347.137546ms) to execute\n2019-12-17 09:30:24.257431 W | etcdserver: read-only range request \"key:\\\"/registry/events\\\" range_end:\\\"/registry/eventt\\\" count_only:true \" with result \"range_response_count:0 size:7\" took too long (378.843711ms) to execute\n2019-12-17 09:30:24.258170 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\\\" \" with result \"range_response_count:1 size:489\" took too long (1.186336142s) to execute\n2019-12-17 09:30:24.258672 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/default\\\" \" with result \"range_response_count:1 size:172\" took too long (418.370324ms) to execute\n2019-12-17 09:30:24.298261 W | etcdserver: request \"header:<ID:12691265878139655584 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/leases/kube-system/kube-scheduler\\\" mod_revision:637 > success:<request_put:<key:\\\"/registry/leases/kube-system/kube-scheduler\\\" value_size:225 >> failure:<request_range:<key:\\\"/registry/leases/kube-system/kube-scheduler\\\" > >>\" with result \"size:16\" took too long (222.645885ms) to execute\n2019-12-17 09:30:24.486636 W | etcdserver: read-only range request \"key:\\\"/registry/networkpolicies\\\" range_end:\\\"/registry/networkpoliciet\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (321.055785ms) to execute\n2019-12-17 09:30:24.585439 W | etcdserver: read-only range request \"key:\\\"/registry/services/specs/default/kubernetes\\\" \" with result \"range_response_count:1 size:293\" took too long (267.032675ms) to execute\n2019-12-17 09:30:27.137963 W | wal: sync duration of 1.00558743s, expected less than 1s\n2019-12-17 09:30:27.272259 W | etcdserver: read-only range request 
\"key:\\\"/registry/apiextensions.k8s.io/customresourcedefinitions\\\" range_end:\\\"/registry/apiextensions.k8s.io/customresourcedefinitiont\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (1.221872003s) to execute\n2019-12-17 09:30:27.375439 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:458\" took too long (1.245475328s) to execute\n2019-12-17 09:30:27.657249 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kube-system/coredns-6955765f44-rdtng\\\" \" with result \"range_response_count:1 size:1849\" took too long (1.37868928s) to execute\n2019-12-17 09:30:27.657984 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:440\" took too long (610.818212ms) to execute\n2019-12-17 09:30:27.658388 W | etcdserver: read-only range request \"key:\\\"/registry/clusterrolebindings\\\" range_end:\\\"/registry/clusterrolebindingt\\\" count_only:true \" with result \"range_response_count:0 size:7\" took too long (1.238869103s) to execute\n2019-12-17 09:30:28.037329 W | etcdserver: read-only range request \"key:\\\"/registry/health\\\" \" with result \"range_response_count:0 size:5\" took too long (316.042948ms) to execute\n2019-12-17 09:30:28.067771 W | etcdserver: read-only range request \"key:\\\"/registry/jobs/\\\" range_end:\\\"/registry/jobs0\\\" limit:500 \" with result \"range_response_count:0 size:5\" took too long (345.59247ms) to execute\n2019-12-17 09:30:28.128496 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:291\" took too long (404.866313ms) to execute\n2019-12-17 09:30:28.252143 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" \" with result 
\"range_response_count:1 size:309\" took too long (528.537876ms) to execute\n2019-12-17 09:30:28.404138 W | etcdserver: request \"header:<ID:12691265878139655606 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/replicasets/kube-system/coredns-6955765f44\\\" mod_revision:398 > success:<request_put:<key:\\\"/registry/replicasets/kube-system/coredns-6955765f44\\\" value_size:1208 >> failure:<request_range:<key:\\\"/registry/replicasets/kube-system/coredns-6955765f44\\\" > >>\" with result \"size:16\" took too long (148.826586ms) to execute\n2019-12-17 09:30:28.407323 W | etcdserver: read-only range request \"key:\\\"/registry/cronjobs/\\\" range_end:\\\"/registry/cronjobs0\\\" limit:500 \" with result \"range_response_count:0 size:5\" took too long (103.106585ms) to execute\n2019-12-17 09:31:08.292862 W | etcdserver: request \"header:<ID:12691265878139657280 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/events/deployment-1554/webserver-595b5b9587.15e11eabe065eb1a\\\" mod_revision:0 > success:<request_put:<key:\\\"/registry/events/deployment-1554/webserver-595b5b9587.15e11eabe065eb1a\\\" value_size:404 lease:3467893841284879683 >> failure:<>>\" with result \"size:16\" took too long (165.181904ms) to execute\n2019-12-17 09:31:08.588499 W | etcdserver: read-only range request \"key:\\\"/registry/pods/port-forwarding-3739/pfpod\\\" \" with result \"range_response_count:1 size:1645\" took too long (106.627017ms) to execute\n2019-12-17 09:31:09.616327 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/resourcequota-5312/\\\" range_end:\\\"/registry/resourcequotas/resourcequota-53120\\\" \" with result \"range_response_count:0 size:5\" took too long (104.777626ms) to execute\n2019-12-17 09:31:09.620907 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubelet-424/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-snppg\\\" \" 
with result \"range_response_count:1 size:1054\" took too long (108.355523ms) to execute\n2019-12-17 09:31:10.912453 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-runtime-8558/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb\\\" \" with result \"range_response_count:1 size:1433\" took too long (111.263083ms) to execute\n2019-12-17 09:31:10.948373 W | etcdserver: read-only range request \"key:\\\"/registry/minions/kind-worker\\\" \" with result \"range_response_count:1 size:1991\" took too long (201.855295ms) to execute\n2019-12-17 09:31:10.948586 W | etcdserver: read-only range request \"key:\\\"/registry/pods/persistent-local-volumes-test-9305/hostexec-kind-worker-p5bcz\\\" \" with result \"range_response_count:1 size:1214\" took too long (148.679805ms) to execute\n2019-12-17 09:31:11.287567 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubelet-424/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-9hn8q\\\" \" with result \"range_response_count:1 size:1492\" took too long (177.749429ms) to execute\n2019-12-17 09:31:11.319055 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:458\" took too long (116.543611ms) to execute\n2019-12-17 09:31:11.319466 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-2780/hostexec-kind-worker2-8ctnp\\\" \" with result \"range_response_count:1 size:1184\" took too long (164.351804ms) to execute\n2019-12-17 09:31:11.326336 W | etcdserver: read-only range request \"key:\\\"/registry/pods/security-context-6518/security-context-e9398e36-2b13-4ce8-8f57-592394ecbaa5\\\" \" with result \"range_response_count:1 size:1448\" took too long (171.237095ms) to execute\n2019-12-17 09:31:11.667383 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/security-context-6518\\\" \" with result \"range_response_count:1 size:287\" took too long 
(106.92101ms) to execute\n2019-12-17 09:31:11.905711 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:290\" took too long (274.334988ms) to execute\n2019-12-17 09:31:11.948069 W | etcdserver: read-only range request \"key:\\\"/registry/deployments/deployment-1554/webserver\\\" \" with result \"range_response_count:1 size:783\" took too long (123.359794ms) to execute\n2019-12-17 09:31:11.948726 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/kube-system\\\" \" with result \"range_response_count:1 size:178\" took too long (122.957545ms) to execute\n2019-12-17 09:31:12.669605 W | etcdserver: request \"header:<ID:12691265878139657554 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/pods/container-lifecycle-hook-9198/pod-handle-http-request\\\" mod_revision:1251 > success:<request_put:<key:\\\"/registry/pods/container-lifecycle-hook-9198/pod-handle-http-request\\\" value_size:1156 >> failure:<request_range:<key:\\\"/registry/pods/container-lifecycle-hook-9198/pod-handle-http-request\\\" > >>\" with result \"size:16\" took too long (532.009226ms) to execute\n2019-12-17 09:31:12.673672 W | etcdserver: read-only range request \"key:\\\"/registry/limitranges\\\" range_end:\\\"/registry/limitranget\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (633.231105ms) to execute\n2019-12-17 09:31:12.674210 W | etcdserver: read-only range request \"key:\\\"/registry/csinodes\\\" range_end:\\\"/registry/csinodet\\\" count_only:true \" with result \"range_response_count:0 size:7\" took too long (689.604551ms) to execute\n2019-12-17 09:31:12.674652 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/hostpath-6897/\\\" range_end:\\\"/registry/resourcequotas/hostpath-68970\\\" \" with result \"range_response_count:0 size:5\" took too long (690.107507ms) to 
execute\n2019-12-17 09:31:12.675465 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/kube-node-lease\\\" \" with result \"range_response_count:1 size:187\" took too long (690.964176ms) to execute\n2019-12-17 09:31:12.675655 W | etcdserver: read-only range request \"key:\\\"/registry/replicasets/deployment-1554/webserver-79fbcb94c6\\\" \" with result \"range_response_count:1 size:844\" took too long (696.261845ms) to execute\n2019-12-17 09:31:12.675911 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-runtime-8558/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb\\\" \" with result \"range_response_count:1 size:1433\" took too long (697.80287ms) to execute\n2019-12-17 09:31:12.692574 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubectl-4807/\\\" range_end:\\\"/registry/pods/kubectl-48070\\\" \" with result \"range_response_count:1 size:1257\" took too long (320.624452ms) to execute\n2019-12-17 09:31:12.694711 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-6946/hostexec-kind-worker2-xf6rt\\\" \" with result \"range_response_count:1 size:1184\" took too long (251.534736ms) to execute\n2019-12-17 09:31:12.696598 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-635/hostexec-kind-worker-ztrjf\\\" \" with result \"range_response_count:1 size:1177\" took too long (126.369846ms) to execute\n2019-12-17 09:31:12.696941 W | etcdserver: read-only range request \"key:\\\"/registry/pods/downward-api-4478/downwardapi-volume-21850c50-74e0-42d6-848b-659e39400f4b\\\" \" with result \"range_response_count:1 size:1544\" took too long (334.739356ms) to execute\n2019-12-17 09:31:12.697642 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/resourcequota-5312/\\\" range_end:\\\"/registry/resourcequotas/resourcequota-53120\\\" \" with result \"range_response_count:0 size:5\" took too long (190.827623ms) to execute\n2019-12-17 
09:31:12.698246 W | etcdserver: read-only range request \"key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-4nlqf\\\" \" with result \"range_response_count:1 size:1221\" took too long (634.347381ms) to execute\n2019-12-17 09:31:12.698392 W | etcdserver: read-only range request \"key:\\\"/registry/pods/port-forwarding-3739/pfpod\\\" \" with result \"range_response_count:1 size:1993\" took too long (216.567716ms) to execute\n2019-12-17 09:31:12.698710 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-2780/hostexec-kind-worker2-8ctnp\\\" \" with result \"range_response_count:1 size:1244\" took too long (283.372378ms) to execute\n2019-12-17 09:31:13.016555 W | etcdserver: request \"header:<ID:12691265878139657561 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/controllers/kubelet-424/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\\\" mod_revision:1537 > success:<request_put:<key:\\\"/registry/controllers/kubelet-424/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\\\" value_size:606 >> failure:<request_range:<key:\\\"/registry/controllers/kubelet-424/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\\\" > >>\" with result \"size:16\" took too long (207.228222ms) to execute\n2019-12-17 09:31:13.029906 W | etcdserver: read-only range request \"key:\\\"/registry/events/container-runtime-8558/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb.15e11eab536296ab\\\" \" with result \"range_response_count:1 size:603\" took too long (337.467439ms) to execute\n2019-12-17 09:31:13.031636 W | etcdserver: read-only range request \"key:\\\"/registry/pods/persistent-local-volumes-test-9305/hostexec-kind-worker-p5bcz\\\" \" with result \"range_response_count:1 size:1214\" took too long (248.824952ms) to execute\n2019-12-17 09:31:13.045808 W | etcdserver: read-only range request \"key:\\\"/registry/pods/persistent-local-volumes-test-9305/hostexec-kind-worker-p5bcz\\\" \" with result 
\"range_response_count:1 size:1214\" took too long (355.903835ms) to execute\n2019-12-17 09:31:13.046639 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-3864/pod-subpath-test-inlinevolume-x2mx\\\" \" with result \"range_response_count:1 size:2579\" took too long (190.459041ms) to execute\n2019-12-17 09:31:13.050799 W | etcdserver: read-only range request \"key:\\\"/registry/serviceaccounts/hostpath-6897/default\\\" \" with result \"range_response_count:1 size:185\" took too long (323.843025ms) to execute\n2019-12-17 09:31:13.051308 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-2780/hostexec-kind-worker2-8ctnp\\\" \" with result \"range_response_count:1 size:1244\" took too long (313.673189ms) to execute\n2019-12-17 09:31:13.051628 W | etcdserver: read-only range request \"key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-fxnx8\\\" \" with result \"range_response_count:1 size:891\" took too long (325.561037ms) to execute\n2019-12-17 09:31:13.052344 W | etcdserver: read-only range request \"key:\\\"/registry/pods/projected-3161/downwardapi-volume-d08b10f6-ab37-42a9-bbaa-5f58c0c19d20\\\" \" with result \"range_response_count:1 size:1817\" took too long (328.760622ms) to execute\n2019-12-17 09:31:13.278441 W | etcdserver: request \"header:<ID:12691265878139657573 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/resourcequotas/resourcequota-5312/test-quota\\\" mod_revision:1584 > success:<request_put:<key:\\\"/registry/resourcequotas/resourcequota-5312/test-quota\\\" value_size:1810 >> failure:<request_range:<key:\\\"/registry/resourcequotas/resourcequota-5312/test-quota\\\" > >>\" with result \"size:16\" took too long (127.174087ms) to execute\n2019-12-17 09:31:13.283196 W | etcdserver: read-only range request \"key:\\\"/registry/pods/projected-3161/downwardapi-volume-d08b10f6-ab37-42a9-bbaa-5f58c0c19d20\\\" \" with result 
\"range_response_count:1 size:1817\" took too long (225.022786ms) to execute\n2019-12-17 09:31:13.283940 W | etcdserver: read-only range request \"key:\\\"/registry/minions/kind-worker2\\\" \" with result \"range_response_count:1 size:1995\" took too long (228.69934ms) to execute\n2019-12-17 09:31:14.233686 W | etcdserver: request \"header:<ID:12691265878139657586 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-ckjbc\\\" mod_revision:1591 > success:<request_put:<key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-ckjbc\\\" value_size:811 >> failure:<request_range:<key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-ckjbc\\\" > >>\" with result \"size:16\" took too long (833.766493ms) to execute\n2019-12-17 09:31:14.235110 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubelet-424/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-t4qt4\\\" \" with result \"range_response_count:1 size:1492\" took too long (926.426074ms) to execute\n2019-12-17 09:31:14.235565 W | etcdserver: read-only range request \"key:\\\"/registry/pods/deployment-1554/webserver-79fbcb94c6-jt9s7\\\" \" with result \"range_response_count:1 size:895\" took too long (920.156893ms) to execute\n2019-12-17 09:31:14.253384 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:458\" took too long (660.715027ms) to execute\n2019-12-17 09:31:14.254123 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/watch-4767\\\" \" with result \"range_response_count:1 size:269\" took too long (278.189895ms) to execute\n2019-12-17 09:31:14.255229 W | etcdserver: read-only range request \"key:\\\"/registry/minions/kind-worker\\\" \" with result \"range_response_count:1 size:1991\" took too long (939.078006ms) to execute\n2019-12-17 09:31:14.255441 W | etcdserver: 
read-only range request \"key:\\\"/registry/pods/provisioning-2780/hostexec-kind-worker2-8ctnp\\\" \" with result \"range_response_count:1 size:1244\" took too long (474.00802ms) to execute\n2019-12-17 09:31:14.255571 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-lifecycle-hook-9198/pod-handle-http-request\\\" \" with result \"range_response_count:1 size:1247\" took too long (283.865287ms) to execute\n2019-12-17 09:31:14.256311 W | etcdserver: read-only range request \"key:\\\"/registry/pods/emptydir-2067/pod-fe816982-ee47-4b11-a6c1-363104a50308\\\" \" with result \"range_response_count:1 size:1314\" took too long (303.355899ms) to execute\n2019-12-17 09:31:14.256654 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-runtime-8558/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb\\\" \" with result \"range_response_count:1 size:1433\" took too long (554.201179ms) to execute\n2019-12-17 09:31:14.257073 W | etcdserver: read-only range request \"key:\\\"/registry/pods/projected-7871/pod-projected-secrets-d4be6995-3836-484e-a6a5-9ff6eebebc92\\\" \" with result \"range_response_count:1 size:1548\" took too long (852.231843ms) to execute\n2019-12-17 09:31:14.258486 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubectl-4807/\\\" range_end:\\\"/registry/pods/kubectl-48070\\\" \" with result \"range_response_count:1 size:1257\" took too long (912.78701ms) to execute\n2019-12-17 09:31:14.266679 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/default\\\" \" with result \"range_response_count:1 size:172\" took too long (761.800601ms) to execute\n2019-12-17 09:31:14.273640 W | etcdserver: read-only range request \"key:\\\"/registry/mutatingwebhookconfigurations\\\" range_end:\\\"/registry/mutatingwebhookconfigurationt\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (135.314497ms) to execute\n2019-12-17 09:31:14.275185 W | etcdserver: read-only 
range request \"key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-9cqpg\\\" \" with result \"range_response_count:1 size:891\" took too long (959.687048ms) to execute\n2019-12-17 09:31:14.275437 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces\\\" range_end:\\\"/registry/namespacet\\\" count_only:true \" with result \"range_response_count:0 size:7\" took too long (231.929996ms) to execute\n2019-12-17 09:31:14.282425 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:440\" took too long (303.489426ms) to execute\n2019-12-17 09:31:14.549529 W | etcdserver: request \"header:<ID:12691265878139657598 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/pods/deployment-1554/webserver-79fbcb94c6-jt9s7\\\" mod_revision:1414 > success:<request_put:<key:\\\"/registry/pods/deployment-1554/webserver-79fbcb94c6-jt9s7\\\" value_size:1146 >> failure:<request_range:<key:\\\"/registry/pods/deployment-1554/webserver-79fbcb94c6-jt9s7\\\" > >>\" with result \"size:16\" took too long (186.475725ms) to execute\n2019-12-17 09:31:14.590331 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:308\" took too long (318.388841ms) to execute\n2019-12-17 09:31:14.839117 W | etcdserver: read-only range request \"key:\\\"/registry/minions/kind-worker2\\\" \" with result \"range_response_count:1 size:2522\" took too long (567.011209ms) to execute\n2019-12-17 09:31:14.839848 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-635/hostexec-kind-worker-ztrjf\\\" \" with result \"range_response_count:1 size:1177\" took too long (272.405955ms) to execute\n2019-12-17 09:31:14.840496 W | etcdserver: read-only range request \"key:\\\"/registry/pods/port-forwarding-3739/pfpod\\\" \" with result 
\"range_response_count:1 size:1993\" took too long (362.737507ms) to execute\n2019-12-17 09:31:14.848346 W | etcdserver: read-only range request \"key:\\\"/registry/pods/projected-3161/downwardapi-volume-d08b10f6-ab37-42a9-bbaa-5f58c0c19d20\\\" \" with result \"range_response_count:1 size:1817\" took too long (498.675395ms) to execute\n2019-12-17 09:31:14.855906 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-6946/hostexec-kind-worker2-xf6rt\\\" \" with result \"range_response_count:1 size:1184\" took too long (401.345488ms) to execute\n2019-12-17 09:31:14.856603 W | etcdserver: read-only range request \"key:\\\"/registry/limitranges/hostpath-6897/\\\" range_end:\\\"/registry/limitranges/hostpath-68970\\\" \" with result \"range_response_count:0 size:5\" took too long (495.524424ms) to execute\n2019-12-17 09:31:14.857201 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubectl-4807/\\\" range_end:\\\"/registry/pods/kubectl-48070\\\" \" with result \"range_response_count:1 size:1257\" took too long (520.837388ms) to execute\n2019-12-17 09:31:14.861253 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-lifecycle-hook-9198/pod-handle-http-request\\\" \" with result \"range_response_count:1 size:1247\" took too long (557.140861ms) to execute\n2019-12-17 09:31:14.861476 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/watch-4767/\\\" range_end:\\\"/registry/resourcequotas/watch-47670\\\" \" with result \"range_response_count:0 size:5\" took too long (565.724292ms) to execute\n2019-12-17 09:31:14.861715 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:291\" took too long (552.541536ms) to execute\n2019-12-17 09:31:14.870302 W | etcdserver: request \"header:<ID:12691265878139657610 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD 
key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-qtv6r\\\" mod_revision:0 > success:<request_put:<key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-qtv6r\\\" value_size:759 >> failure:<>>\" with result \"size:16\" took too long (153.153927ms) to execute\n2019-12-17 09:31:14.874996 W | etcdserver: read-only range request \"key:\\\"/registry/services/specs/default/kubernetes\\\" \" with result \"range_response_count:1 size:293\" took too long (563.969224ms) to execute\n2019-12-17 09:31:14.907387 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-runtime-8558/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb\\\" \" with result \"range_response_count:1 size:1433\" took too long (292.237065ms) to execute\n2019-12-17 09:31:14.907682 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubelet-424/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-zpqhp\\\" \" with result \"range_response_count:1 size:1492\" took too long (292.553703ms) to execute\n2019-12-17 09:31:14.909343 W | etcdserver: read-only range request \"key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-9cqpg\\\" \" with result \"range_response_count:1 size:903\" took too long (294.720979ms) to execute\n2019-12-17 09:31:14.909912 W | etcdserver: read-only range request \"key:\\\"/registry/pods/volume-8426/hostexec-kind-worker-n6nmj\\\" \" with result \"range_response_count:1 size:1167\" took too long (295.202284ms) to execute\n2019-12-17 09:31:15.204593 W | etcdserver: request \"header:<ID:12691265878139657619 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-qtv6r\\\" mod_revision:1620 > success:<request_put:<key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-qtv6r\\\" value_size:812 >> failure:<request_range:<key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-qtv6r\\\" > >>\" with result \"size:16\" took too long (113.956641ms) to 
execute\n2019-12-17 09:31:15.206663 W | etcdserver: read-only range request \"key:\\\"/registry/pods/downward-api-4478/downwardapi-volume-21850c50-74e0-42d6-848b-659e39400f4b\\\" \" with result \"range_response_count:1 size:1544\" took too long (500.34149ms) to execute\n2019-12-17 09:31:15.207071 W | etcdserver: read-only range request \"key:\\\"/registry/masterleases/172.17.0.3\\\" \" with result \"range_response_count:1 size:129\" took too long (304.719029ms) to execute\n2019-12-17 09:31:15.207578 W | etcdserver: read-only range request \"key:\\\"/registry/pods/persistent-local-volumes-test-9305/hostexec-kind-worker-p5bcz\\\" \" with result \"range_response_count:1 size:1274\" took too long (425.857243ms) to execute\n2019-12-17 09:31:15.208693 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/watch-4767/\\\" range_end:\\\"/registry/resourcequotas/watch-47670\\\" \" with result \"range_response_count:0 size:5\" took too long (305.862363ms) to execute\n2019-12-17 09:31:15.545859 W | etcdserver: request \"header:<ID:12691265878139657631 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/persistentvolumeclaims/provisioning-2780/pvc-w89kd\\\" mod_revision:0 > success:<request_put:<key:\\\"/registry/persistentvolumeclaims/provisioning-2780/pvc-w89kd\\\" value_size:238 >> failure:<>>\" with result \"size:16\" took too long (196.270648ms) to execute\n2019-12-17 09:31:15.563654 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/resourcequota-5312/test-quota\\\" \" with result \"range_response_count:1 size:1887\" took too long (486.338266ms) to execute\n2019-12-17 09:31:15.565833 W | etcdserver: read-only range request \"key:\\\"/registry/minions/kind-control-plane\\\" \" with result \"range_response_count:1 size:2077\" took too long (453.219461ms) to execute\n2019-12-17 09:31:15.566315 W | etcdserver: read-only range request 
\"key:\\\"/registry/pods/provisioning-3864/pod-subpath-test-inlinevolume-x2mx\\\" \" with result \"range_response_count:1 size:2579\" took too long (499.701056ms) to execute\n2019-12-17 09:31:15.568770 W | etcdserver: read-only range request \"key:\\\"/registry/cronjobs/cronjob-373/replace\\\" \" with result \"range_response_count:1 size:481\" took too long (566.507207ms) to execute\n2019-12-17 09:31:15.569339 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:308\" took too long (617.495689ms) to execute\n2019-12-17 09:31:15.569769 W | etcdserver: read-only range request \"key:\\\"/registry/pods/hostpath-6897/pod-host-path-test\\\" \" with result \"range_response_count:1 size:1084\" took too long (624.379976ms) to execute\n2019-12-17 09:31:15.570298 W | etcdserver: read-only range request \"key:\\\"/registry/events/pv-9509/pod-ephm-test-projected-vdvd.15e11ea9c7b1209f\\\" \" with result \"range_response_count:1 size:508\" took too long (634.356522ms) to execute\n2019-12-17 09:31:15.922728 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-lifecycle-hook-9198/pod-with-poststart-http-hook\\\" \" with result \"range_response_count:1 size:821\" took too long (683.351522ms) to execute\n2019-12-17 09:31:15.923303 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubectl-4807/\\\" range_end:\\\"/registry/pods/kubectl-48070\\\" \" with result \"range_response_count:1 size:1257\" took too long (604.118345ms) to execute\n2019-12-17 09:31:15.923775 W | etcdserver: read-only range request \"key:\\\"/registry/events/watch-4767/\\\" range_end:\\\"/registry/events/watch-47670\\\" \" with result \"range_response_count:0 size:5\" took too long (705.72819ms) to execute\n2019-12-17 09:31:15.923990 W | etcdserver: read-only range request \"key:\\\"/registry/pods/projected-3161/downwardapi-volume-d08b10f6-ab37-42a9-bbaa-5f58c0c19d20\\\" \" with 
result \"range_response_count:1 size:1829\" took too long (684.609515ms) to execute\n2019-12-17 09:31:15.924245 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:291\" took too long (705.423042ms) to execute\n2019-12-17 09:31:15.924344 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\\\" \" with result \"range_response_count:1 size:489\" took too long (614.020361ms) to execute\n2019-12-17 09:31:15.924918 W | etcdserver: read-only range request \"key:\\\"/registry/pods/persistent-local-volumes-test-9305/hostexec-kind-worker-p5bcz\\\" \" with result \"range_response_count:1 size:1274\" took too long (672.720505ms) to execute\n2019-12-17 09:31:15.925571 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-runtime-8558/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb\\\" \" with result \"range_response_count:1 size:1929\" took too long (648.159946ms) to execute\n2019-12-17 09:31:15.934461 W | etcdserver: read-only range request \"key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-fxnx8\\\" \" with result \"range_response_count:1 size:1222\" took too long (685.844232ms) to execute\n2019-12-17 09:31:15.934961 W | etcdserver: read-only range request \"key:\\\"/registry/pods/deployment-1554/webserver-79fbcb94c6-5jj5g\\\" \" with result \"range_response_count:0 size:5\" took too long (686.02702ms) to execute\n2019-12-17 09:31:15.935322 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubelet-424/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-7fsdg\\\" \" with result \"range_response_count:1 size:1493\" took too long (694.435534ms) to execute\n2019-12-17 09:31:15.967194 W | etcdserver: request \"header:<ID:12691265878139657643 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/masterleases/172.17.0.3\\\" 
mod_revision:1219 > success:<request_put:<key:\\\"/registry/masterleases/172.17.0.3\\\" value_size:65 lease:3467893841284881827 >> failure:<request_range:<key:\\\"/registry/masterleases/172.17.0.3\\\" > >>\" with result \"size:16\" took too long (239.523028ms) to execute\n2019-12-17 09:31:16.276270 W | etcdserver: read-only range request \"key:\\\"/registry/events/init-container-6513/pod-init-3bfe7ac6-9b80-4f9a-8275-96259665338f.15e11eab83a3bc03\\\" \" with result \"range_response_count:1 size:549\" took too long (682.217704ms) to execute\n2019-12-17 09:31:16.277431 W | etcdserver: request \"header:<ID:12691265878139657654 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/pods/projected-3161/downwardapi-volume-d08b10f6-ab37-42a9-bbaa-5f58c0c19d20\\\" mod_revision:1631 > success:<request_delete_range:<key:\\\"/registry/pods/projected-3161/downwardapi-volume-d08b10f6-ab37-42a9-bbaa-5f58c0c19d20\\\" > > failure:<request_range:<key:\\\"/registry/pods/projected-3161/downwardapi-volume-d08b10f6-ab37-42a9-bbaa-5f58c0c19d20\\\" > >>\" with result \"size:18\" took too long (194.491656ms) to execute\n2019-12-17 09:31:16.307491 W | etcdserver: read-only range request \"key:\\\"/registry/pods/services-7332/externalsvc-swdz6\\\" \" with result \"range_response_count:1 size:1230\" took too long (341.064533ms) to execute\n2019-12-17 09:31:16.312523 W | etcdserver: read-only range request \"key:\\\"/registry/minions/kind-worker\\\" \" with result \"range_response_count:1 size:2518\" took too long (373.238531ms) to execute\n2019-12-17 09:31:16.322662 W | etcdserver: read-only range request \"key:\\\"/registry/events/watch-4767/\\\" range_end:\\\"/registry/events/watch-47670\\\" \" with result \"range_response_count:0 size:5\" took too long (375.523422ms) to execute\n2019-12-17 09:31:16.357261 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumeclaims/provisioning-2780/pvc-w89kd\\\" \" with result 
\"range_response_count:1 size:321\" took too long (326.821162ms) to execute\n2019-12-17 09:31:16.357817 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/default/kubernetes\\\" \" with result \"range_response_count:1 size:213\" took too long (329.278622ms) to execute\n2019-12-17 09:31:16.362999 W | etcdserver: read-only range request \"key:\\\"/registry/events/container-runtime-8558/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb.15e11eab69b54282\\\" \" with result \"range_response_count:1 size:603\" took too long (332.343111ms) to execute\n2019-12-17 09:31:16.586537 W | etcdserver: read-only range request \"key:\\\"/registry/networkpolicies/watch-4767/\\\" range_end:\\\"/registry/networkpolicies/watch-47670\\\" \" with result \"range_response_count:0 size:5\" took too long (256.017689ms) to execute\n2019-12-17 09:31:16.587924 W | etcdserver: read-only range request \"key:\\\"/registry/pods/emptydir-2067/pod-fe816982-ee47-4b11-a6c1-363104a50308\\\" \" with result \"range_response_count:1 size:1314\" took too long (270.085457ms) to execute\n2019-12-17 09:31:16.588209 W | etcdserver: read-only range request \"key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-fxnx8\\\" \" with result \"range_response_count:1 size:1234\" took too long (258.760178ms) to execute\n2019-12-17 09:31:16.588377 W | etcdserver: read-only range request \"key:\\\"/registry/pods/projected-3161/\\\" range_end:\\\"/registry/pods/projected-31610\\\" \" with result \"range_response_count:0 size:5\" took too long (256.738201ms) to execute\n2019-12-17 09:31:16.590541 W | etcdserver: read-only range request \"key:\\\"/registry/pods/projected-7871/pod-projected-secrets-d4be6995-3836-484e-a6a5-9ff6eebebc92\\\" \" with result \"range_response_count:1 size:1548\" took too long (270.474427ms) to execute\n2019-12-17 09:31:16.590720 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubelet-424/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-thk8l\\\" \" 
with result \"range_response_count:1 size:1493\" took too long (256.301479ms) to execute\n2019-12-17 09:31:16.591045 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubectl-4807/\\\" range_end:\\\"/registry/pods/kubectl-48070\\\" \" with result \"range_response_count:1 size:1257\" took too long (256.801355ms) to execute\n2019-12-17 09:31:16.592165 W | etcdserver: read-only range request \"key:\\\"/registry/pods/port-forwarding-3739/pfpod\\\" \" with result \"range_response_count:1 size:1993\" took too long (114.374257ms) to execute\n2019-12-17 09:31:16.592744 W | etcdserver: read-only range request \"key:\\\"/registry/minions/\\\" range_end:\\\"/registry/minions0\\\" \" with result \"range_response_count:3 size:7103\" took too long (143.07318ms) to execute\n2019-12-17 09:31:16.595780 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-6946/hostexec-kind-worker2-xf6rt\\\" \" with result \"range_response_count:1 size:1184\" took too long (148.712534ms) to execute\n2019-12-17 09:31:16.603405 W | etcdserver: read-only range request \"key:\\\"/registry/masterleases/\\\" range_end:\\\"/registry/masterleases0\\\" \" with result \"range_response_count:1 size:129\" took too long (224.32631ms) to execute\n2019-12-17 09:31:16.603981 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-635/hostexec-kind-worker-ztrjf\\\" \" with result \"range_response_count:1 size:1177\" took too long (231.418987ms) to execute\n2019-12-17 09:31:16.751697 W | etcdserver: read-only range request \"key:\\\"/registry/events/init-container-6513/pod-init-3bfe7ac6-9b80-4f9a-8275-96259665338f.15e11eaba19b9da4\\\" \" with result \"range_response_count:1 size:549\" took too long (112.998829ms) to execute\n2019-12-17 09:31:16.752825 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/projected-3161\\\" \" with result \"range_response_count:1 size:266\" took too long (112.128185ms) to execute\n2019-12-17 09:31:16.892103 
W | etcdserver: read-only range request \"key:\\\"/registry/ingress/watch-4767/\\\" range_end:\\\"/registry/ingress/watch-47670\\\" \" with result \"range_response_count:0 size:5\" took too long (126.815221ms) to execute\n2019-12-17 09:31:16.893420 W | etcdserver: read-only range request \"key:\\\"/registry/events/container-runtime-8558/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb.15e11eabc031ba57\\\" \" with result \"range_response_count:1 size:605\" took too long (124.71264ms) to execute\n2019-12-17 09:31:16.899333 W | etcdserver: read-only range request \"key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-pjnt5\\\" \" with result \"range_response_count:1 size:1233\" took too long (140.718289ms) to execute\n2019-12-17 09:31:17.161180 W | etcdserver: read-only range request \"key:\\\"/registry/pods/init-container-6513/pod-init-3bfe7ac6-9b80-4f9a-8275-96259665338f\\\" \" with result \"range_response_count:1 size:1720\" took too long (246.302485ms) to execute\n2019-12-17 09:31:17.161469 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/downward-api-8407\\\" \" with result \"range_response_count:0 size:5\" took too long (252.6649ms) to execute\n2019-12-17 09:31:17.165497 W | etcdserver: read-only range request \"key:\\\"/registry/podtemplates/watch-4767/\\\" range_end:\\\"/registry/podtemplates/watch-47670\\\" \" with result \"range_response_count:0 size:5\" took too long (264.226752ms) to execute\n2019-12-17 09:31:17.177709 W | etcdserver: read-only range request \"key:\\\"/registry/cronjobs/cronjob-373/replace\\\" \" with result \"range_response_count:1 size:481\" took too long (172.869618ms) to execute\n2019-12-17 09:31:17.178305 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-runtime-8558/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb\\\" \" with result \"range_response_count:1 size:1929\" took too long (230.31688ms) to execute\n2019-12-17 09:31:17.179034 W | etcdserver: read-only range 
request \"key:\\\"/registry/namespaces/security-context-6518\\\" \" with result \"range_response_count:1 size:302\" took too long (246.655876ms) to execute\n2019-12-17 09:31:17.447846 W | etcdserver: read-only range request \"key:\\\"/registry/podtemplates/watch-4767/\\\" range_end:\\\"/registry/podtemplates/watch-47670\\\" \" with result \"range_response_count:0 size:5\" took too long (274.584279ms) to execute\n2019-12-17 09:31:17.448490 W | etcdserver: read-only range request \"key:\\\"/registry/pods/deployment-1554/webserver-b5dd7476d-9cqpg\\\" \" with result \"range_response_count:0 size:5\" took too long (276.664788ms) to execute\n2019-12-17 09:31:17.448735 W | etcdserver: read-only range request \"key:\\\"/registry/events/init-container-6513/pod-init-3bfe7ac6-9b80-4f9a-8275-96259665338f.15e11eac15ab473a\\\" \" with result \"range_response_count:1 size:563\" took too long (282.023779ms) to execute\n2019-12-17 09:31:17.449588 W | etcdserver: request \"header:<ID:12691265878139657727 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/deployments/deployment-1554/webserver\\\" mod_revision:1706 > success:<request_put:<key:\\\"/registry/deployments/deployment-1554/webserver\\\" value_size:713 >> failure:<request_range:<key:\\\"/registry/deployments/deployment-1554/webserver\\\" > >>\" with result \"size:16\" took too long (174.662089ms) to execute\n2019-12-17 09:31:17.450495 W | etcdserver: read-only range request \"key:\\\"/registry/health\\\" \" with result \"range_response_count:0 size:5\" took too long (186.467964ms) to execute\n2019-12-17 09:31:17.450737 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/downward-api-8407/\\\" range_end:\\\"/registry/resourcequotas/downward-api-84070\\\" \" with result \"range_response_count:0 size:5\" took too long (246.152121ms) to execute\n2019-12-17 09:31:17.450927 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubectl-4807/\\\" 
range_end:\\\"/registry/pods/kubectl-48070\\\" \" with result \"range_response_count:1 size:1257\" took too long (131.620874ms) to execute\n2019-12-17 09:31:17.451103 W | etcdserver: read-only range request \"key:\\\"/registry/pods/downward-api-4478/downwardapi-volume-21850c50-74e0-42d6-848b-659e39400f4b\\\" \" with result \"range_response_count:1 size:1544\" took too long (212.426792ms) to execute\n2019-12-17 09:31:17.451380 W | etcdserver: read-only range request \"key:\\\"/registry/minions/kind-worker2\\\" \" with result \"range_response_count:1 size:2522\" took too long (238.134376ms) to execute\n2019-12-17 09:31:17.451484 W | etcdserver: read-only range request \"key:\\\"/registry/pods/deployment-1554/webserver-79fbcb94c6-d42gq\\\" \" with result \"range_response_count:1 size:894\" took too long (237.931147ms) to execute\n2019-12-17 09:31:17.451584 W | etcdserver: read-only range request \"key:\\\"/registry/limitranges/security-context-6518/\\\" range_end:\\\"/registry/limitranges/security-context-65180\\\" \" with result \"range_response_count:0 size:5\" took too long (215.103911ms) to execute\n2019-12-17 09:31:17.641718 W | etcdserver: read-only range request \"key:\\\"/registry/pods/projected-7871/pod-projected-secrets-d4be6995-3836-484e-a6a5-9ff6eebebc92\\\" \" with result \"range_response_count:1 size:1548\" took too long (179.489728ms) to execute\n2019-12-17 09:31:17.641790 W | etcdserver: read-only range request \"key:\\\"/registry/serviceaccounts/downward-api-8407/default\\\" \" with result \"range_response_count:1 size:193\" took too long (160.567506ms) to execute\n2019-12-17 09:31:17.642167 W | etcdserver: read-only range request \"key:\\\"/registry/leases/watch-4767/\\\" range_end:\\\"/registry/leases/watch-47670\\\" \" with result \"range_response_count:0 size:5\" took too long (180.159481ms) to execute\n2019-12-17 09:31:17.642666 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumeclaims/security-context-6518/\\\" 
range_end:\\\"/registry/persistentvolumeclaims/security-context-65180\\\" \" with result \"range_response_count:0 size:5\" took too long (145.836761ms) to execute\n2019-12-17 09:31:17.642744 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubelet-424/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-6kdkd\\\" \" with result \"range_response_count:1 size:1492\" took too long (146.217939ms) to execute\n2019-12-17 09:31:17.643008 W | etcdserver: read-only range request \"key:\\\"/registry/events/pv-3058/pod-ephm-test-projected-gnsb.15e11eaa9a3d3ba4\\\" \" with result \"range_response_count:1 size:508\" took too long (148.211721ms) to execute\n2019-12-17 09:31:17.823473 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubelet-424/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-vntcj\\\" \" with result \"range_response_count:1 size:1493\" took too long (156.9411ms) to execute\n2019-12-17 09:31:17.824401 W | etcdserver: read-only range request \"key:\\\"/registry/rolebindings/watch-4767/\\\" range_end:\\\"/registry/rolebindings/watch-47670\\\" \" with result \"range_response_count:0 size:5\" took too long (148.787565ms) to execute\n2019-12-17 09:31:17.824670 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumeclaims/security-context-6518/\\\" range_end:\\\"/registry/persistentvolumeclaims/security-context-65180\\\" \" with result \"range_response_count:0 size:5\" took too long (143.717094ms) to execute\n2019-12-17 09:31:19.492062 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/watch-4767\\\" \" with result \"range_response_count:1 size:860\" took too long (113.908089ms) to execute\n2019-12-17 09:31:19.728447 W | etcdserver: read-only range request \"key:\\\"/registry/deployments/security-context-6518/\\\" range_end:\\\"/registry/deployments/security-context-65180\\\" \" with result \"range_response_count:0 size:5\" took too long (134.986943ms) to execute\n2019-12-17 09:31:33.191544 W | 
etcdserver: read-only range request \"key:\\\"/registry/pods/local-path-storage/create-pvc-882c2e4c-59dd-4d40-8011-ff3ad9289695\\\" \" with result \"range_response_count:1 size:1190\" took too long (104.738268ms) to execute\n2019-12-17 09:31:35.460242 W | etcdserver: read-only range request \"key:\\\"/registry/pods/volume-8426/local-injector\\\" \" with result \"range_response_count:1 size:818\" took too long (125.497811ms) to execute\n==== END logs for container etcd of pod kube-system/etcd-kind-control-plane ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-4gr5t ====\nI1217 09:29:58.346826       1 main.go:64] hostIP = 172.17.0.4\npodIP = 172.17.0.4\nI1217 09:30:28.448250       1 main.go:104] Failed to get nodes, retrying after error: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout\nI1217 09:30:28.488813       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:30:28.488839       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:30:28.539104       1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0} \nI1217 09:30:28.539414       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:30:28.539559       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:30:28.539803       1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.0.2 Flags: [] Table: 0} \nI1217 09:30:28.539990       1 main.go:150] handling current node\nI1217 09:30:38.552373       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:30:38.552405       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:30:38.552517       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:30:38.552525       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:30:38.552576       1 main.go:150] handling current node\nI1217 09:30:48.646722       1 main.go:161] Handling node with IP: 
172.17.0.3\nI1217 09:30:48.646749       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:30:48.646849       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:30:48.646862       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:30:48.739342       1 main.go:150] handling current node\nI1217 09:30:58.787736       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:30:58.787769       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:30:58.839444       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:30:58.839475       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:30:58.842927       1 main.go:150] handling current node\nI1217 09:31:08.942257       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:31:08.942287       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:31:08.942523       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:31:08.942531       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:31:08.942886       1 main.go:150] handling current node\nI1217 09:31:18.982439       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:31:18.982470       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:31:18.982685       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:31:18.982700       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:31:18.982819       1 main.go:150] handling current node\nI1217 09:31:29.039766       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:31:29.039795       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:31:29.040055       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:31:29.040339       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:31:29.040568       1 main.go:150] handling current node\nI1217 09:31:39.048182       1 main.go:161] Handling node with IP: 
172.17.0.3\nI1217 09:31:39.048211       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:31:39.048372       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:31:39.048377       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:31:39.048451       1 main.go:150] handling current node\nI1217 09:31:49.140191       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:31:49.140319       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:31:49.140594       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:31:49.140672       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:31:49.140900       1 main.go:150] handling current node\n==== END logs for container kindnet-cni of pod kube-system/kindnet-4gr5t ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-b98rv ====\nI1217 09:29:58.340223       1 main.go:64] hostIP = 172.17.0.2\npodIP = 172.17.0.2\nI1217 09:30:28.352775       1 main.go:104] Failed to get nodes, retrying after error: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout\nI1217 09:30:28.440941       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:30:28.441278       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:30:28.441750       1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0} \nI1217 09:30:28.442071       1 main.go:150] handling current node\nI1217 09:30:28.446707       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:30:28.447346       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:30:28.448405       1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.0.4 Flags: [] Table: 0} \nI1217 09:30:38.540695       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:30:38.540720       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 
09:30:38.540822       1 main.go:150] handling current node\nI1217 09:30:38.540834       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:30:38.540839       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:30:48.641573       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:30:48.641802       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:30:48.641963       1 main.go:150] handling current node\nI1217 09:30:48.641981       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:30:48.641987       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:30:58.695703       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:30:58.695737       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:30:58.695859       1 main.go:150] handling current node\nI1217 09:30:58.739151       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:30:58.739263       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:31:08.745185       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:31:08.745215       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:31:08.745426       1 main.go:150] handling current node\nI1217 09:31:08.745441       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:31:08.745447       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:31:18.784200       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:31:18.784231       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:31:18.784469       1 main.go:150] handling current node\nI1217 09:31:18.784484       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:31:18.784490       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:31:28.793160       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:31:28.793186       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 
09:31:28.793398       1 main.go:150] handling current node\nI1217 09:31:28.793410       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:31:28.793415       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:31:38.839924       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:31:38.839951       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:31:38.840195       1 main.go:150] handling current node\nI1217 09:31:38.840209       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:31:38.840215       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:31:48.845283       1 main.go:161] Handling node with IP: 172.17.0.3\nI1217 09:31:48.845314       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI1217 09:31:48.845509       1 main.go:150] handling current node\nI1217 09:31:48.845527       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:31:48.845543       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \n==== END logs for container kindnet-cni of pod kube-system/kindnet-b98rv ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-fw7lc ====\nI1217 09:29:35.540957       1 main.go:64] hostIP = 172.17.0.3\npodIP = 172.17.0.3\nI1217 09:30:05.545547       1 main.go:104] Failed to get nodes, retrying after error: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout\nI1217 09:30:05.641468       1 main.go:150] handling current node\nI1217 09:30:05.645938       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:30:05.645955       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:30:05.646081       1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.0.2 Flags: [] Table: 0} \nI1217 09:30:05.646121       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:30:05.646124       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:30:05.646185       1 
routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.0.4 Flags: [] Table: 0} \nI1217 09:30:15.655220       1 main.go:150] handling current node\nI1217 09:30:15.655266       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:30:15.655274       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:30:15.655400       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:30:15.655563       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:30:26.099244       1 main.go:150] handling current node\nI1217 09:30:26.099278       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:30:26.099286       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:30:26.099392       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:30:26.099398       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:30:36.240529       1 main.go:150] handling current node\nI1217 09:30:36.240663       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:30:36.240671       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:30:36.240807       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:30:36.240815       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:30:46.339562       1 main.go:150] handling current node\nI1217 09:30:46.339791       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:30:46.339797       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:30:46.339950       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:30:46.340045       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:30:56.345679       1 main.go:150] handling current node\nI1217 09:30:56.345711       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:30:56.345716       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:30:56.345820       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 
09:30:56.345825       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:31:06.440952       1 main.go:150] handling current node\nI1217 09:31:06.441018       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:31:06.441026       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:31:06.441217       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:31:06.441252       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:31:16.605661       1 main.go:150] handling current node\nI1217 09:31:16.605697       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:31:16.605705       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:31:16.605848       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:31:16.605855       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:31:26.617154       1 main.go:150] handling current node\nI1217 09:31:26.617252       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:31:26.617261       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:31:26.617374       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:31:26.617392       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:31:36.640650       1 main.go:150] handling current node\nI1217 09:31:36.640689       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:31:36.640697       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:31:36.640800       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:31:36.640816       1 main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \nI1217 09:31:46.739985       1 main.go:150] handling current node\nI1217 09:31:46.740019       1 main.go:161] Handling node with IP: 172.17.0.2\nI1217 09:31:46.740026       1 main.go:162] Node kind-worker has CIDR 10.244.1.0/24 \nI1217 09:31:46.740132       1 main.go:161] Handling node with IP: 172.17.0.4\nI1217 09:31:46.740137       1 
main.go:162] Node kind-worker2 has CIDR 10.244.2.0/24 \n==== END logs for container kindnet-cni of pod kube-system/kindnet-fw7lc ====\n==== START logs for container kube-apiserver of pod kube-system/kube-apiserver-kind-control-plane ====\nFlag --insecure-port has been deprecated, This flag will be removed in a future version.\nI1217 09:29:06.405274       1 server.go:596] external host was not specified, using 172.17.0.3\nI1217 09:29:06.405606       1 server.go:150] Version: v1.18.0-alpha.0.1812+5ad586f84e16e5\nI1217 09:29:07.079084       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.\nI1217 09:29:07.079112       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.\nI1217 09:29:07.080481       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.\nI1217 09:29:07.080505       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.\nI1217 09:29:07.083615       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.083675       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.261231       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.261266       1 
endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.281220       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.281269       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.353393       1 master.go:264] Using reconciler: lease\nI1217 09:29:07.354684       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.354956       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.369401       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.369459       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.392097       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.393581       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.408069       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.413390       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.427500       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.427544       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.439478       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.439526       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.455696       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.455756       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.477553       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.477600       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: 
[{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.490899       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.491198       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.503639       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.503680       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.518562       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.518938       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.533406       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.533481       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.558240       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.558580       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.575930       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.576455       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.598790       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.598883       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.621623       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.621666       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.647540       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.647591       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.661069       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.661113       1 
endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.676301       1 rest.go:113] the default service ipfamily for this cluster is: IPv4\nI1217 09:29:07.864350       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.864399       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.878539       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.879006       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.893567       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.893829       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.912350       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.912472       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.936269       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.936847       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.953167       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.953225       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.964409       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.964452       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.977440       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.977487       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:07.993381       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:07.993468       1 endpoint.go:68] ccResolverWrapper: sending new 
addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.008677       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.008720       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.024760       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.024804       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.038206       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.038247       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.053062       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.053104       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.078628       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.078674       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.082393       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.082422       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.097102       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.097683       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.109467       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.109639       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.124530       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.124777       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.142468       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.142697      
 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.160429       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.160734       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.177346       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.177477       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.191552       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.191680       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.205262       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.205296       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.223297       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.225093       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.246582       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.247285       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.261110       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.261421       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.273337       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.273457       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.307769       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.307841       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.324690       1 client.go:361] 
parsed scheme: \"endpoint\"\nI1217 09:29:08.324743       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.339434       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.339780       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.366255       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.366320       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.382096       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.382286       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.392644       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.392682       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.407219       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.407276       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.419776       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.419815       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.432792       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.432835       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.449021       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.449195       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.477984       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.478277       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  
<nil>}]\nI1217 09:29:08.496614       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.496765       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.509852       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.509926       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.522013       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.522181       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.535935       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.536251       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.547929       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.548050       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nW1217 09:29:08.748231       1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.\nW1217 09:29:08.759497       1 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.\nW1217 09:29:08.773662       1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.\nW1217 09:29:08.799849       1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.\nW1217 09:29:08.804838       1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.\nW1217 09:29:08.833176       1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.\nW1217 09:29:08.876949       1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.\nW1217 09:29:08.876991       1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.\nI1217 
09:29:08.898577       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.\nI1217 09:29:08.898613       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.\nI1217 09:29:08.901172       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.901215       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:08.913497       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:29:08.913535       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:29:11.539305       1 dynamic_cafile_content.go:166] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt\nI1217 09:29:11.539375       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt\nI1217 09:29:11.539605       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key\nI1217 09:29:11.539766       1 secure_serving.go:178] Serving securely on [::]:6443\nI1217 09:29:11.539960       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nI1217 09:29:11.540424       1 controller.go:81] Starting OpenAPI AggregationController\nI1217 09:29:11.540580       1 available_controller.go:386] Starting AvailableConditionController\nI1217 09:29:11.540588       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller\nI1217 09:29:11.540661       1 crd_finalizer.go:264] Starting CRDFinalizer\nI1217 09:29:11.540750       1 controller.go:86] Starting OpenAPI 
controller
I1217 09:29:11.540773       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I1217 09:29:11.540795       1 naming_controller.go:289] Starting NamingConditionController
I1217 09:29:11.540818       1 establishing_controller.go:74] Starting EstablishingController
I1217 09:29:11.540840       1 nonstructuralschema_controller.go:185] Starting NonStructuralSchemaConditionController
I1217 09:29:11.540857       1 apiapproval_controller.go:184] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1217 09:29:11.540922       1 autoregister_controller.go:140] Starting autoregister controller
I1217 09:29:11.540933       1 cache.go:32] Waiting for caches to sync for autoregister controller
I1217 09:29:11.541556       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1217 09:29:11.541580       1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I1217 09:29:11.541605       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1217 09:29:11.541611       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I1217 09:29:11.541647       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
I1217 09:29:11.541672       1 dynamic_cafile_content.go:166] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt
E1217 09:29:11.543009       1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg: 
I1217 09:29:11.544004       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1217 09:29:11.544321       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1217 09:29:11.641294       1 cache.go:39] Caches are synced for autoregister controller
I1217 09:29:11.642283       1 cache.go:39] Caches are synced for AvailableConditionController controller
I1217 09:29:11.642425       1 shared_informer.go:204] Caches are synced for crd-autoregister 
I1217 09:29:11.642449       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
I1217 09:29:11.644910       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1217 09:29:12.539155       1 controller.go:107] OpenAPI AggregationController: Processing item 
I1217 09:29:12.539405       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1217 09:29:12.539423       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1217 09:29:12.549218       1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I1217 09:29:12.556109       1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I1217 09:29:12.556406       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I1217 09:29:13.117618       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1217 09:29:13.177207       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1217 09:29:13.381240       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.3]
I1217 09:29:13.383231       1 controller.go:606] quota admission added evaluator for: endpoints
I1217 09:29:13.743640       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1217 09:29:14.954420       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1217 09:29:14.979276       1 controller.go:606] quota admission added evaluator for: deployments.apps
I1217 09:29:15.113593       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1217 09:29:33.236536       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1217 09:29:33.416014       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1217 09:30:21.281648       1 trace.go:116] Trace[1465693319]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2019-12-17 09:30:20.769939876 +0000 UTC m=+74.505326454) (total time: 511.659656ms):
Trace[1465693319]: [511.496815ms] [500.648626ms] Transaction committed
I1217 09:30:21.294945       1 trace.go:116] Trace[2049113252]: "Patch" url:/api/v1/namespaces/kube-system/pods/coredns-6955765f44-rdtng/status,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.3 (started: 2019-12-17 09:30:20.76975666 +0000 UTC m=+74.505143235) (total time: 525.133548ms):
Trace[2049113252]: [524.848444ms] [514.275939ms] Object stored in database
I1217 09:30:22.328317       1 trace.go:116] Trace[861128203]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/5ad586f/leader-election,client:172.17.0.3 (started: 2019-12-17 09:30:20.956592059 +0000 UTC m=+74.691978626) (total time: 1.371635907s):
Trace[861128203]: [1.371512s] [1.371495361s] About to write a response
I1217 09:30:22.500733       1 trace.go:116] Trace[739461053]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2019-12-17 09:30:21.404245182 +0000 UTC m=+75.139631793) (total time: 1.096422439s):
Trace[739461053]: [1.096351513s] [1.096056416s] Transaction committed
I1217 09:30:22.501371       1 trace.go:116] Trace[1634196426]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/leader-election,client:172.17.0.3 (started: 2019-12-17 09:30:21.394776478 +0000 UTC m=+75.130163050) (total time: 1.106533822s):
Trace[1634196426]: [1.106411061s] [1.106330378s] Object stored in database
I1217 09:30:22.503340       1 trace.go:116] Trace[813111049]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2019-12-17 09:30:21.393115762 +0000 UTC m=+75.128502338) (total time: 1.110192335s):
Trace[813111049]: [1.109015037s] [1.108380852s] Transaction committed
I1217 09:30:22.555234       1 trace.go:116] Trace[666269163]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-dns,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/system:serviceaccount:kube-system:endpoint-controller,client:172.17.0.3 (started: 2019-12-17 09:30:21.392861174 +0000 UTC m=+75.128247743) (total time: 1.162318757s):
Trace[666269163]: [1.162213642s] [1.162035541s] Object stored in database
I1217 09:30:22.710368       1 trace.go:116] Trace[736596185]: "Get" url:/api/v1/namespaces/local-path-storage/pods/local-path-provisioner-7745554f7f-jktcl,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.3 (started: 2019-12-17 09:30:21.584259973 +0000 UTC m=+75.319646574) (total time: 1.126045838s):
Trace[736596185]: [1.125940634s] [1.125922086s] About to write a response
I1217 09:30:22.809230       1 trace.go:116] Trace[1515480180]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2019-12-17 09:30:22.218022369 +0000 UTC m=+75.953408947) (total time: 591.140873ms):
Trace[1515480180]: [591.11224ms] [590.834878ms] Transaction committed
I1217 09:30:22.809718       1 trace.go:116] Trace[793864269]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kind-worker,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.2 (started: 2019-12-17 09:30:22.217803164 +0000 UTC m=+75.953189738) (total time: 591.851624ms):
Trace[793864269]: [591.750365ms] [591.594732ms] Object stored in database
I1217 09:30:23.816660       1 trace.go:116] Trace[288630573]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2019-12-17 09:30:23.020432551 +0000 UTC m=+76.755819128) (total time: 796.186517ms):
Trace[288630573]: [796.165095ms] [795.918784ms] Transaction committed
I1217 09:30:23.816798       1 trace.go:116] Trace[1302225229]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/leader-election,client:172.17.0.3 (started: 2019-12-17 09:30:23.020274968 +0000 UTC m=+76.755661544) (total time: 796.496116ms):
Trace[1302225229]: [796.419317ms] [796.311813ms] Object stored in database
I1217 09:30:23.824334       1 trace.go:116] Trace[497854221]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/5ad586f/leader-election,client:172.17.0.3 (started: 2019-12-17 09:30:23.018346942 +0000 UTC m=+76.753733547) (total time: 805.940314ms):
Trace[497854221]: [805.888725ms] [805.855295ms] About to write a response
I1217 09:30:24.775244       1 trace.go:116] Trace[923007140]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2019-12-17 09:30:23.841962461 +0000 UTC m=+77.577349070) (total time: 933.231739ms):
Trace[923007140]: [933.178134ms] [922.594022ms] Transaction committed
I1217 09:30:24.775391       1 trace.go:116] Trace[417346123]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/5ad586f/leader-election,client:172.17.0.3 (started: 2019-12-17 09:30:23.841772231 +0000 UTC m=+77.577158801) (total time: 933.588881ms):
Trace[417346123]: [933.505914ms] [933.375937ms] Object stored in database
I1217 09:30:24.780746       1 trace.go:116] Trace[1825665794]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:127.0.0.1 (started: 2019-12-17 09:30:24.272449799 +0000 UTC m=+78.007836368) (total time: 508.24578ms):
Trace[1825665794]: [508.172355ms] [508.161708ms] About to write a response
I1217 09:30:25.155121       1 trace.go:116] Trace[1965226030]: "Get" url:/api/v1/namespaces/local-path-storage/endpoints/rancher.io-local-path,user-agent:local-path-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.244.0.3 (started: 2019-12-17 09:30:23.026587773 +0000 UTC m=+76.761974343) (total time: 2.128478612s):
Trace[1965226030]: [2.128363937s] [2.128355543s] About to write a response
I1217 09:30:25.532598       1 trace.go:116] Trace[187398790]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:127.0.0.1 (started: 2019-12-17 09:30:24.917201309 +0000 UTC m=+78.652587899) (total time: 615.347115ms):
Trace[187398790]: [615.294954ms] [615.281932ms] About to write a response
I1217 09:30:26.098278       1 trace.go:116] Trace[244711740]: "List etcd3" key:/masterleases/,resourceVersion:0,limit:0,continue: (started: 2019-12-17 09:30:25.540671415 +0000 UTC m=+79.276057993) (total time: 557.567089ms):
Trace[244711740]: [557.567089ms] [557.567089ms] END
I1217 09:30:26.145481       1 trace.go:116] Trace[1671283091]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2019-12-17 09:30:25.604014781 +0000 UTC m=+79.339401354) (total time: 541.425463ms):
Trace[1671283091]: [541.39541ms] [540.999952ms] Transaction committed
I1217 09:30:26.145636       1 trace.go:116] Trace[1696070863]: "Update" url:/api/v1/namespaces/local-path-storage/endpoints/rancher.io-local-path,user-agent:local-path-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.244.0.3 (started: 2019-12-17 09:30:25.603668815 +0000 UTC m=+79.339055382) (total time: 541.938197ms):
Trace[1696070863]: [541.841263ms] [541.576236ms] Object stored in database
I1217 09:30:26.253130       1 trace.go:116] Trace[1283246665]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2019-12-17 09:30:22.745487675 +0000 UTC m=+76.480874255) (total time: 3.507596491s):
Trace[1283246665]: [3.507419723s] [3.506150724s] Transaction committed
I1217 09:30:26.253530       1 trace.go:116] Trace[1884863271]: "Patch" url:/api/v1/namespaces/local-path-storage/pods/local-path-provisioner-7745554f7f-jktcl/status,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.3 (started: 2019-12-17 09:30:22.74531296 +0000 UTC m=+76.480699537) (total time: 3.508182651s):
Trace[1884863271]: [3.507952956s] [3.506787463s] Object stored in database
I1217 09:30:27.596484       1 trace.go:116] Trace[1223926736]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/leader-election,client:172.17.0.3 (started: 2019-12-17 09:30:25.864034609 +0000 UTC m=+79.599421181) (total time: 1.732391395s):
Trace[1223926736]: [1.732324789s] [1.732272887s] About to write a response
I1217 09:30:27.600119       1 trace.go:116] Trace[1528264199]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (started: 2019-12-17 09:30:26.148687798 +0000 UTC m=+79.884074376) (total time: 1.451395591s):
Trace[1528264199]: [1.451287121s] [1.450863413s] Transaction committed
I1217 09:30:27.600274       1 trace.go:116] Trace[1117116570]: "Update" url:/apis/apps/v1/namespaces/local-path-storage/replicasets/local-path-provisioner-7745554f7f/status,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/system:serviceaccount:kube-system:replicaset-controller,client:172.17.0.3 (started: 2019-12-17 09:30:26.148460616 +0000 UTC m=+79.883847185) (total time: 1.451786245s):
Trace[1117116570]: [1.451686387s] [1.45154688s] Object stored in database
I1217 09:30:27.682687       1 trace.go:116] Trace[252924110]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/5ad586f/leader-election,client:172.17.0.3 (started: 2019-12-17 09:30:26.797413006 +0000 UTC m=+80.532799581) (total time: 885.229251ms):
Trace[252924110]: [885.186155ms] [885.148598ms] About to write a response
I1217 09:30:27.698535       1 trace.go:116] Trace[2057268087]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-6955765f44-rdtng,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.3 (started: 2019-12-17 09:30:26.261075714 +0000 UTC m=+79.996462282) (total time: 1.437406437s):
Trace[2057268087]: [1.437303472s] [1.437291363s] About to write a response
I1217 09:30:28.284416       1 trace.go:116] Trace[340439474]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2019-12-17 09:30:27.626444671 +0000 UTC m=+81.361831240) (total time: 657.892827ms):
Trace[340439474]: [657.892827ms] [657.892827ms] END
I1217 09:30:28.284930       1 trace.go:116] Trace[1766477981]: "List" url:/apis/batch/v1/jobs,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/system:serviceaccount:kube-system:cronjob-controller,client:172.17.0.3 (started: 2019-12-17 09:30:27.626407808 +0000 UTC m=+81.361794378) (total time: 658.44892ms):
Trace[1766477981]: [658.320988ms] [658.294501ms] Listing from storage done
I1217 09:30:28.310822       1 trace.go:116] Trace[1043972989]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2019-12-17 09:30:27.707413484 +0000 UTC m=+81.442800058) (total time: 603.359179ms):
Trace[1043972989]: [603.220126ms] [600.810524ms] Transaction committed
I1217 09:30:28.312081       1 trace.go:116] Trace[414458432]: "Patch" url:/api/v1/namespaces/kube-system/pods/coredns-6955765f44-rdtng/status,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.3 (started: 2019-12-17 09:30:27.707253023 +0000 UTC m=+81.442639590) (total time: 604.788692ms):
Trace[414458432]: [604.524344ms] [602.224356ms] Object stored in database
I1217 09:30:28.335144       1 trace.go:116] Trace[1859819650]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/leader-election,client:172.17.0.3 (started: 2019-12-17 09:30:27.61253677 +0000 UTC m=+81.347923328) (total time: 722.542463ms):
Trace[1859819650]: [722.450915ms] [722.421311ms] About to write a response
I1217 09:30:28.335206       1 trace.go:116] Trace[2135553537]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/5ad586f/leader-election,client:172.17.0.3 (started: 2019-12-17 09:30:27.697501861 +0000 UTC m=+81.432888436) (total time: 637.681251ms):
Trace[2135553537]: [637.647314ms] [637.616003ms] About to write a response
I1217 09:31:00.992115       1 controller.go:606] quota admission added evaluator for: cronjobs.batch
I1217 09:31:04.026457       1 client.go:361] parsed scheme: "endpoint"
I1217 09:31:04.026612       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I1217 09:31:11.400906       1 trace.go:116] Trace[228066939]: "Delete" url:/api/v1/namespaces/security-context-6518/pods/security-context-e9398e36-2b13-4ce8-8f57-592394ecbaa5,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly],client:172.17.0.1 (started: 2019-12-17 09:31:10.898630448 +0000 UTC m=+124.634017018) (total time: 502.20344ms):
Trace[228066939]: [502.134853ms] [502.083301ms] Object deleted from database
I1217 09:31:12.671545       1 trace.go:116] Trace[934590011]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2019-12-17 09:31:12.030976536 +0000 UTC m=+125.766363115) (total time: 640.520377ms):
Trace[934590011]: [640.480343ms] [639.047244ms] Transaction committed
I1217 09:31:12.671623       1 trace.go:116] Trace[432135742]: "Create" url:/api/v1/namespaces/projected-7871/events,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.4 (started: 2019-12-17 09:31:12.044074407 +0000 UTC m=+125.779460991) (total time: 627.505982ms):
Trace[432135742]: [627.438222ms] [627.226646ms] Object stored in database
I1217 09:31:12.671803       1 trace.go:116] Trace[404634022]: "Patch" url:/api/v1/namespaces/container-lifecycle-hook-9198/pods/pod-handle-http-request/status,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.2 (started: 2019-12-17 09:31:12.030384459 +0000 UTC m=+125.765771032) (total time: 641.390323ms):
Trace[404634022]: [641.22397ms] [639.932004ms] Object stored in database
I1217 09:31:12.675334       1 trace.go:116] Trace[1378721540]: "List etcd3" key:/resourcequotas/hostpath-6897,resourceVersion:,limit:0,continue: (started: 2019-12-17 09:31:11.983811463 +0000 UTC m=+125.719198157) (total time: 691.486078ms):
Trace[1378721540]: [691.486078ms] [691.486078ms] END
I1217 09:31:12.679991       1 trace.go:116] Trace[338195955]: "List" url:/api/v1/namespaces/hostpath-6897/resourcequotas,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:127.0.0.1 (started: 2019-12-17 09:31:11.983794197 +0000 UTC m=+125.719180768) (total time: 696.168428ms):
Trace[338195955]: [696.114645ms] [696.106561ms] Listing from storage done
I1217 09:31:12.679820       1 trace.go:116] Trace[991144031]: "Get" url:/apis/apps/v1/namespaces/deployment-1554/replicasets/webserver-79fbcb94c6,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/system:serviceaccount:kube-system:replicaset-controller,client:172.17.0.3 (started: 2019-12-17 09:31:11.974070178 +0000 UTC m=+125.709456744) (total time: 705.708846ms):
Trace[991144031]: [705.642358ms] [705.630718ms] About to write a response
I1217 09:31:12.684435       1 trace.go:116] Trace[928764188]: "Get" url:/api/v1/namespaces/kube-node-lease,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:127.0.0.1 (started: 2019-12-17 09:31:11.983795068 +0000 UTC m=+125.719181640) (total time: 698.802603ms):
Trace[928764188]: [698.748178ms] [698.73794ms] About to write a response
I1217 09:31:12.696558       1 trace.go:116] Trace[1203040529]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2019-12-17 09:31:12.104831775 +0000 UTC m=+125.840218354) (total time: 591.689396ms):
Trace[1203040529]: [591.608359ms] [590.63868ms] Transaction committed
I1217 09:31:12.696633       1 trace.go:116] Trace[1173144467]: "Get" url:/api/v1/namespaces/container-runtime-8558/pods/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:11.958751244 +0000 UTC m=+125.694137811) (total time: 737.854603ms):
Trace[1173144467]: [737.769335ms] [737.762113ms] About to write a response
I1217 09:31:12.697400       1 trace.go:116] Trace[881563958]: "Patch" url:/api/v1/namespaces/kubelet-424/pods/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-bkmcp/status,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.4 (started: 2019-12-17 09:31:12.103520273 +0000 UTC m=+125.838906837) (total time: 593.850717ms):
Trace[881563958]: [593.709186ms] [591.672606ms] Object stored in database
I1217 09:31:12.700008       1 trace.go:116] Trace[239256409]: "Create" url:/api/v1/namespaces/hostpath-6897/serviceaccounts,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/system:serviceaccount:kube-system:service-account-controller,client:172.17.0.3 (started: 2019-12-17 09:31:11.982689019 +0000 UTC m=+125.718075589) (total time: 717.284283ms):
Trace[239256409]: [717.245796ms] [717.086245ms] Object stored in database
I1217 09:31:13.284757       1 trace.go:116] Trace[535584065]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2019-12-17 09:31:12.688591785 +0000 UTC m=+126.423978479) (total time: 596.126945ms):
Trace[535584065]: [349.426493ms] [349.426493ms] initial value restored
Trace[535584065]: [596.105812ms] [246.31397ms] Transaction committed
I1217 09:31:13.284904       1 trace.go:116] Trace[600746836]: "Patch" url:/api/v1/namespaces/container-runtime-8558/events/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb.15e11eab536296ab,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.4 (started: 2019-12-17 09:31:12.688471978 +0000 UTC m=+126.423858559) (total time: 596.37605ms):
Trace[600746836]: [349.548949ms] [349.502354ms] About to apply patch
Trace[600746836]: [596.32018ms] [246.577879ms] Object stored in database
I1217 09:31:13.303291       1 trace.go:116] Trace[755139183]: "Delete" url:/api/v1/namespaces/deployment-1554/pods/webserver-b5dd7476d-4nlqf,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-apps] Deployment iterative rollouts should eventually progress,client:172.17.0.1 (started: 2019-12-17 09:31:12.05877004 +0000 UTC m=+125.794156615) (total time: 1.244472783s):
Trace[755139183]: [1.244406199s] [1.2443812s] Object deleted from database
I1217 09:31:14.237066       1 trace.go:116] Trace[1045781270]: "Get" url:/api/v1/namespaces/deployment-1554/pods/webserver-79fbcb94c6-jt9s7,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.4 (started: 2019-12-17 09:31:13.312146588 +0000 UTC m=+127.047533145) (total time: 924.87644ms):
Trace[1045781270]: [924.814797ms] [924.807397ms] About to write a response
I1217 09:31:14.237461       1 trace.go:116] Trace[468514854]: "GuaranteedUpdate etcd3" type:*core.ServiceAccount (started: 2019-12-17 09:31:13.303031619 +0000 UTC m=+127.038418200) (total time: 934.395039ms):
Trace[468514854]: [934.367188ms] [934.13222ms] Transaction committed
I1217 09:31:14.239815       1 trace.go:116] Trace[346391413]: "Update" url:/api/v1/namespaces/hostpath-6897/serviceaccounts/default,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/tokens-controller,client:172.17.0.3 (started: 2019-12-17 09:31:13.302911054 +0000 UTC m=+127.038297615) (total time: 936.870766ms):
Trace[346391413]: [936.808479ms] [936.738422ms] Object stored in database
I1217 09:31:14.238478       1 trace.go:116] Trace[186193464]: "Create" url:/api/v1/namespaces/deployment-1554/events,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/system:serviceaccount:kube-system:replicaset-controller,client:172.17.0.3 (started: 2019-12-17 09:31:13.305953443 +0000 UTC m=+127.041340020) (total time: 932.486458ms):
Trace[186193464]: [932.434752ms] [931.812148ms] Object stored in database
I1217 09:31:14.254665       1 trace.go:116] Trace[1975486932]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/leader-election,client:172.17.0.3 (started: 2019-12-17 09:31:13.592012615 +0000 UTC m=+127.327399182) (total time: 662.61452ms):
Trace[1975486932]: [662.569324ms] [662.530271ms] About to write a response
I1217 09:31:14.272181       1 trace.go:116] Trace[310255924]: "Get" url:/api/v1/namespaces/container-runtime-8558/pods/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:13.701796715 +0000 UTC m=+127.437183283) (total time: 570.331431ms):
Trace[310255924]: [570.248182ms] [570.233625ms] About to write a response
I1217 09:31:14.273341       1 trace.go:116] Trace[2085799154]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (started: 2019-12-17 09:31:13.30617851 +0000 UTC m=+127.041565116) (total time: 967.097863ms):
Trace[2085799154]: [967.047306ms] [966.678276ms] Transaction committed
I1217 09:31:14.273549       1 trace.go:116] Trace[1335710194]: "Update" url:/apis/apps/v1/namespaces/deployment-1554/replicasets/webserver-79fbcb94c6/status,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/system:serviceaccount:kube-system:replicaset-controller,client:172.17.0.3 (started: 2019-12-17 09:31:13.305348597 +0000 UTC m=+127.040735171) (total time: 968.13298ms):
Trace[1335710194]: [968.022911ms] [967.275057ms] Object stored in database
I1217 09:31:14.273617       1 trace.go:116] Trace[748213770]: "Get" url:/api/v1/namespaces/projected-7871/pods/pod-projected-secrets-d4be6995-3836-484e-a6a5-9ff6eebebc92,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:13.404299985 +0000 UTC m=+127.139686552) (total time: 869.281419ms):
Trace[748213770]: [869.227046ms] [869.214017ms] About to write a response
I1217 09:31:14.273897       1 trace.go:116] Trace[1165377284]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (started: 2019-12-17 09:31:13.306090635 +0000 UTC m=+127.041477213) (total time: 967.745146ms):
Trace[1165377284]: [967.703758ms] [967.400824ms] Transaction committed
I1217 09:31:14.274057       1 trace.go:116] Trace[842005962]: "Update" url:/apis/apps/v1/namespaces/deployment-1554/replicasets/webserver-b5dd7476d/status,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/system:serviceaccount:kube-system:replicaset-controller,client:172.17.0.3 (started: 2019-12-17 09:31:13.305803809 +0000 UTC m=+127.041190380) (total time: 968.201011ms):
Trace[842005962]: [968.115156ms] [967.897257ms] Object stored in database
I1217 09:31:14.275610       1 trace.go:116] Trace[1909058781]: "Create" url:/api/v1/namespaces/projected-7871/events,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.4 (started: 2019-12-17 09:31:13.315127207 +0000 UTC m=+127.050513776) (total time: 960.457809ms):
Trace[1909058781]: [960.419273ms] [960.307383ms] Object stored in database
I1217 09:31:14.275891       1 trace.go:116] Trace[98024507]: "Create" url:/api/v1/namespaces/provisioning-3864/events,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.2 (started: 2019-12-17 09:31:13.313478189 +0000 UTC m=+127.048864760) (total time: 962.364057ms):
Trace[98024507]: [962.325913ms] [962.218409ms] Object stored in database
I1217 09:31:14.276114       1 trace.go:116] Trace[544336545]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:127.0.0.1 (started: 2019-12-17 09:31:13.504340267 +0000 UTC m=+127.239726833) (total time: 771.742839ms):
Trace[544336545]: [771.703698ms] [771.69064ms] About to write a response
I1217 09:31:14.280127       1 trace.go:116] Trace[1801447277]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2019-12-17 09:31:13.309443705 +0000 UTC m=+127.044830286) (total time: 970.656545ms):
Trace[1801447277]: [970.597494ms] [970.448754ms] Transaction committed
I1217 09:31:14.280355       1 trace.go:116] Trace[890176225]: "Create" url:/api/v1/namespaces/deployment-1554/pods/webserver-79fbcb94c6-s8rxp/binding,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/5ad586f/scheduler,client:172.17.0.3 (started: 2019-12-17 09:31:13.309141781 +0000 UTC m=+127.044528359) (total time: 971.188813ms):
Trace[890176225]: [971.150175ms] [970.967745ms] Object stored in database
I1217 09:31:14.285991       1 trace.go:116] Trace[1821874517]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2019-12-17 09:31:13.290187818 +0000 UTC m=+127.025574392) (total time: 995.77536ms):
Trace[1821874517]: [995.730564ms] [995.618388ms] Transaction committed
I1217 09:31:14.286144       1 trace.go:116] Trace[933021134]: "Create" url:/api/v1/namespaces/deployment-1554/pods/webserver-b5dd7476d-ckjbc/binding,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/5ad586f/scheduler,client:172.17.0.3 (started: 2019-12-17 09:31:13.2899186 +0000 UTC m=+127.025305158) (total time: 996.137633ms):
Trace[933021134]: [996.108132ms] [995.9412ms] Object stored in database
I1217 09:31:14.286198       1 trace.go:116] Trace[1942807998]: "List etcd3" key:/pods/kubectl-4807,resourceVersion:,limit:0,continue: (started: 2019-12-17 09:31:13.318495613 +0000 UTC m=+127.053882226) (total time: 967.672288ms):
Trace[1942807998]: [967.672288ms] [967.672288ms] END
I1217 09:31:14.286416       1 trace.go:116] Trace[61588064]: "List" url:/api/v1/namespaces/kubectl-4807/pods,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:13.318452312 +0000 UTC m=+127.053838882) (total time: 967.935305ms):
Trace[61588064]: [967.776234ms] [967.740011ms] Listing from storage done
I1217 09:31:14.288276       1 trace.go:116] Trace[1478219428]: "Get" url:/api/v1/namespaces/kubelet-424/pods/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-t4qt4,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.2 (started: 2019-12-17 09:31:13.308131059 +0000 UTC m=+127.043517643) (total time: 980.107436ms):
Trace[1478219428]: [977.987906ms] [977.979386ms] About to write a response
I1217 09:31:14.312856       1 trace.go:116] Trace[1049753301]: "Get" url:/api/v1/namespaces/projected-3161/pods/downwardapi-volume-d08b10f6-ab37-42a9-bbaa-5f58c0c19d20/log,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:13.055606131 +0000 UTC m=+126.790992702) (total time: 1.257186616s):
Trace[1049753301]: [1.200983147s] [1.200959501s] About to write a response
I1217 09:31:14.861054       1 trace.go:116] Trace[1524573742]: "List etcd3" key:/limitranges/hostpath-6897,resourceVersion:,limit:0,continue: (started: 2019-12-17 09:31:14.344627935 +0000 UTC m=+128.080014541) (total time: 516.382992ms):
Trace[1524573742]: [516.382992ms] [516.382992ms] END
I1217 09:31:14.861171       1 trace.go:116] Trace[833655586]: "List" url:/api/v1/namespaces/hostpath-6897/limitranges,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:127.0.0.1 (started: 2019-12-17 09:31:14.344612447 +0000 UTC m=+128.079999018) (total time: 516.531518ms):
Trace[833655586]: [516.463904ms] [516.457371ms] Listing from storage done
I1217 09:31:14.870373       1 trace.go:116] Trace[1768738990]: "Get" url:/api/v1/namespaces/container-lifecycle-hook-9198/pods/pod-handle-http-request,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:14.275939049 +0000 UTC m=+128.011325695) (total time: 594.38611ms):
Trace[1768738990]: [594.318781ms] [594.308362ms] About to write a response
I1217 09:31:14.871127       1 trace.go:116] Trace[1301719934]: "List etcd3" key:/pods/kubectl-4807,resourceVersion:,limit:0,continue: (started: 2019-12-17 09:31:14.31921372 +0000 UTC m=+128.054600309) (total time: 551.883365ms):
Trace[1301719934]: [551.883365ms] [551.883365ms] END
I1217 09:31:14.872088       1 trace.go:116] Trace[477377239]: "List" url:/api/v1/namespaces/kubectl-4807/pods,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:14.319139731 +0000 UTC m=+128.054526307) (total time: 552.913478ms):
Trace[477377239]: [552.691641ms] [552.629081ms] Listing from storage done
I1217 09:31:14.871213       1 trace.go:116] Trace[1038761845]: "List etcd3" key:/resourcequotas/watch-4767,resourceVersion:,limit:0,continue: (started: 2019-12-17 09:31:14.268762165 +0000 UTC m=+128.004148746) (total time: 602.432751ms):
Trace[1038761845]: [602.432751ms] [602.432751ms] END
I1217 09:31:14.872840       1 trace.go:116] Trace[1628325192]: "Delete" url:/api/v1/namespaces/watch-4767/resourcequotas (started: 2019-12-17 09:31:14.268411623 +0000 UTC m=+128.003798193) (total time: 604.398241ms):
Trace[1628325192]: [604.398241ms] [604.398241ms] END
I1217 09:31:14.875643       1 trace.go:116] Trace[680559064]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:127.0.0.1 (started: 2019-12-17 09:31:14.280537974 +0000 UTC m=+128.015924549) (total time: 595.065658ms):
Trace[680559064]: [595.022122ms] [595.013151ms] About to write a response
I1217 09:31:14.875926       1 trace.go:116] Trace[508239495]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/5ad586f/leader-election,client:172.17.0.3 (started: 2019-12-17 09:31:14.28784507 +0000 UTC m=+128.023231624) (total time: 588.054253ms):
Trace[508239495]: [587.990645ms] [587.973994ms] About to write a response
I1217 09:31:14.911700       1 trace.go:116] Trace[372857297]: "Create" url:/api/v1/namespaces/hostpath-6897/pods,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:14.343893097 +0000 UTC m=+128.079279658) (total time: 567.760276ms):
Trace[372857297]: [521.340013ms] [521.223774ms] About to store object in database
I1217 09:31:15.228520       1 trace.go:116] Trace[202523413]: "Delete" url:/api/v1/namespaces/deployment-1554/pods/webserver-b5dd7476d-9cqpg,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-apps] Deployment iterative rollouts should eventually progress,client:172.17.0.1 (started: 2019-12-17 09:31:13.309567358 +0000 UTC m=+127.044953925) (total time: 1.918286484s):
Trace[202523413]: [1.918230556s] [1.91821504s] Object deleted from database
I1217 09:31:15.235909       1 trace.go:116] Trace[948598020]: "Get" url:/api/v1/namespaces/downward-api-4478/pods/downwardapi-volume-21850c50-74e0-42d6-848b-659e39400f4b,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:14.705751689 +0000 UTC m=+128.441138257) (total time: 530.08194ms):
Trace[948598020]: [530.022719ms] [530.010959ms] About to write a response
I1217 09:31:15.572314       1 trace.go:116] Trace[1079022102]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/leader-election,client:172.17.0.3 (started: 2019-12-17 09:31:14.945030347 +0000 UTC m=+128.680416912) (total time: 627.236696ms):
Trace[1079022102]: [627.188597ms] [627.163172ms] About to write a response
I1217 09:31:15.572674       1 trace.go:116] Trace[997010743]: "Get" url:/apis/batch/v1beta1/namespaces/cronjob-373/cronjobs/replace,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-apps] CronJob should replace jobs when ReplaceConcurrent,client:172.17.0.1 (started: 2019-12-17 09:31:15.001666737 +0000 UTC m=+128.737053305) (total time: 570.977137ms):
Trace[997010743]: [570.923852ms] [570.905784ms] About to write a response
I1217 09:31:15.573496       1 trace.go:116] Trace[236223680]: "Get" url:/api/v1/namespaces/resourcequota-5312/resourcequotas/test-quota,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.,client:172.17.0.1 (started: 2019-12-17 09:31:15.06595707 +0000 UTC m=+128.801343640) (total time: 507.504982ms):
Trace[236223680]: [507.429808ms] [507.416986ms] About to write a response
I1217 09:31:15.576894       1 trace.go:116] Trace[1030968296]: "Get" url:/api/v1/namespaces/provisioning-3864/pods/pod-subpath-test-inlinevolume-x2mx,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount,client:172.17.0.1 (started: 2019-12-17 09:31:15.054565378 +0000 UTC m=+128.789951945) (total time: 522.26994ms):
Trace[1030968296]: [522.198774ms] [522.179681ms] About to write a response
I1217 09:31:15.577835       1 controller.go:606] quota admission added evaluator for: e2e-test-resourcequota-3541-crds.resourcequota.example.com
I1217 09:31:15.579004       1 trace.go:116] Trace[1306784593]: "Get" url:/api/v1/namespaces/hostpath-6897/pods/pod-host-path-test,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:14.924703301 +0000 UTC m=+128.660089870) (total time: 653.239229ms):
Trace[1306784593]: [653.141753ms] [653.13454ms] About to write a response
I1217 09:31:15.583963       1 trace.go:116] Trace[2062863574]: "Create" url:/api/v1/namespaces/provisioning-3864/events,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.2 (started: 2019-12-17 09:31:14.95326638 +0000 UTC m=+128.688652954) (total time: 630.657775ms):
Trace[2062863574]: [630.593339ms] [630.407851ms] Object stored in database
I1217 09:31:15.586687       1 trace.go:116] Trace[1000886123]: "Create" url:/api/v1/namespaces/deployment-1554/events,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/5ad586f/system:serviceaccount:kube-system:replicaset-controller,client:172.17.0.3 (started: 2019-12-17 09:31:14.930357237 +0000 UTC m=+128.665743796) (total time: 656.297363ms):
Trace[1000886123]: [656.250693ms] [654.63631ms] Object stored in database
I1217 09:31:15.924898       1 trace.go:116] Trace[646212638]: "List etcd3" key:/events/watch-4767,resourceVersion:,limit:0,continue: (started: 2019-12-17 09:31:15.212821622 +0000 UTC m=+128.948208205) (total time: 712.001914ms):
Trace[646212638]: [712.001914ms] [712.001914ms] END
I1217 09:31:15.925069       1 trace.go:116] Trace[707707977]: "Delete" url:/apis/events.k8s.io/v1beta1/namespaces/watch-4767/events (started: 2019-12-17 09:31:15.212720903 +0000 UTC m=+128.948107478) (total time: 712.327153ms):
Trace[707707977]: [712.327153ms] [712.327153ms] END
I1217 09:31:15.925115       1 trace.go:116] Trace[1665739644]: "Get" url:/api/v1/namespaces/local-path-storage/endpoints/rancher.io-local-path,user-agent:local-path-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:10.244.0.3 (started: 2019-12-17 09:31:15.309588748 +0000 UTC m=+129.044975340) (total time: 615.481485ms):
Trace[1665739644]: [615.372651ms] [615.356011ms] About to write a response
I1217 09:31:15.925597       1 trace.go:116] Trace[148097769]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.18.0 (linux/amd64) kubernetes/5ad586f/leader-election,client:172.17.0.3 (started: 2019-12-17 09:31:15.211682468 +0000 UTC m=+128.947069040) (total time: 713.880321ms):
Trace[148097769]: [713.825304ms] [713.803615ms] About to write a response
I1217 09:31:15.926128       1 trace.go:116] Trace[326107592]: "List etcd3" key:/pods/kubectl-4807,resourceVersion:,limit:0,continue: (started: 
2019-12-17 09:31:15.318652001 +0000 UTC m=+129.054038585) (total time: 607.448105ms):\nTrace[326107592]: [607.448105ms] [607.448105ms] END\nI1217 09:31:15.926937       1 trace.go:116] Trace[1446409762]: \"List\" url:/api/v1/namespaces/kubectl-4807/pods,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:15.318577562 +0000 UTC m=+129.053964115) (total time: 608.327823ms):\nTrace[1446409762]: [608.119183ms] [608.058156ms] Listing from storage done\nI1217 09:31:15.925684       1 trace.go:116] Trace[1504185836]: \"Get\" url:/api/v1/namespaces/container-lifecycle-hook-9198/pods/pod-with-poststart-http-hook,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:15.238561328 +0000 UTC m=+128.973947895) (total time: 687.089965ms):\nTrace[1504185836]: [687.016922ms] [687.008597ms] About to write a response\nI1217 09:31:15.936774       1 trace.go:116] Trace[1753314756]: \"Get\" url:/api/v1/namespaces/deployment-1554/pods/webserver-79fbcb94c6-5jj5g,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.2 (started: 2019-12-17 09:31:15.229697643 +0000 UTC m=+128.965084218) (total time: 707.036102ms):\nTrace[1753314756]: [707.036102ms] [707.027877ms] END\nI1217 09:31:15.937356       1 trace.go:116] Trace[445677239]: \"Get\" url:/api/v1/namespaces/kubelet-424/pods/cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-7fsdg,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.4 (started: 2019-12-17 09:31:15.231783479 +0000 UTC m=+128.967170053) (total time: 705.540714ms):\nTrace[445677239]: [705.471381ms] [705.465109ms] About to write a response\nI1217 09:31:15.937806       1 trace.go:116] 
Trace[833883669]: \"Get\" url:/api/v1/namespaces/container-runtime-8558/pods/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:15.275261761 +0000 UTC m=+129.010648313) (total time: 662.511862ms):\nTrace[833883669]: [662.449805ms] [662.439636ms] About to write a response\nI1217 09:31:16.018430       1 trace.go:116] Trace[533598]: \"GuaranteedUpdate etcd3\" type:*core.Event (started: 2019-12-17 09:31:14.921486917 +0000 UTC m=+128.656873494) (total time: 1.096901496s):\nTrace[533598]: [668.90154ms] [668.90154ms] initial value restored\nTrace[533598]: [1.096884488s] [427.459905ms] Transaction committed\nI1217 09:31:16.018940       1 trace.go:116] Trace[1041304747]: \"Patch\" url:/api/v1/namespaces/pv-9509/events/pod-ephm-test-projected-vdvd.15e11ea9c7b1209f,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.4 (started: 2019-12-17 09:31:14.921403757 +0000 UTC m=+128.656790327) (total time: 1.09745227s):\nTrace[1041304747]: [668.987377ms] [668.965273ms] About to apply patch\nTrace[1041304747]: [1.097367507s] [428.000186ms] Object stored in database\nI1217 09:31:16.025945       1 trace.go:116] Trace[1641641441]: \"GuaranteedUpdate etcd3\" type:*v1.Endpoints (started: 2019-12-17 09:31:14.878147354 +0000 UTC m=+128.613533934) (total time: 1.147764124s):\nTrace[1641641441]: [331.500103ms] [331.500103ms] initial value restored\nTrace[1641641441]: [697.490269ms] [365.990166ms] Transaction prepared\nTrace[1641641441]: [1.147742114s] [450.251845ms] Transaction committed\nI1217 09:31:16.318025       1 trace.go:116] Trace[1291684880]: \"Create\" url:/apis/resourcequota.example.com/v1/namespaces/resourcequota-5312/e2e-test-resourcequota-3541-crds,user-agent:e2e.test/v0.0.0 (linux/amd64) 
kubernetes/$Format -- [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.,client:172.17.0.1 (started: 2019-12-17 09:31:15.576675028 +0000 UTC m=+129.312061594) (total time: 741.299323ms):\nTrace[1291684880]: [741.133097ms] [740.183054ms] Object stored in database\nI1217 09:31:16.323409       1 trace.go:116] Trace[1843127280]: \"Delete\" url:/api/v1/namespaces/projected-3161/pods/downwardapi-volume-d08b10f6-ab37-42a9-bbaa-5f58c0c19d20,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance],client:172.17.0.1 (started: 2019-12-17 09:31:14.339395667 +0000 UTC m=+128.074782235) (total time: 1.983972931s):\nTrace[1843127280]: [1.983901998s] [1.98383185s] Object deleted from database\nI1217 09:31:16.357859       1 trace.go:116] Trace[2106002420]: \"GuaranteedUpdate etcd3\" type:*core.Event (started: 2019-12-17 09:31:15.590311806 +0000 UTC m=+129.325698384) (total time: 767.504312ms):\nTrace[2106002420]: [703.606082ms] [703.606082ms] initial value restored\nI1217 09:31:16.358015       1 trace.go:116] Trace[1083048588]: \"Patch\" url:/api/v1/namespaces/init-container-6513/events/pod-init-3bfe7ac6-9b80-4f9a-8275-96259665338f.15e11eab83a3bc03,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.2 (started: 2019-12-17 09:31:15.590227353 +0000 UTC m=+129.325613926) (total time: 767.754813ms):\nTrace[1083048588]: [703.694148ms] [703.665757ms] About to apply patch\nE1217 09:31:16.476473       1 upgradeaware.go:371] Error proxying data from backend to client: write tcp 172.17.0.3:6443->172.17.0.1:34968: write: broken pipe\nI1217 09:31:16.598269       1 trace.go:116] Trace[1015467595]: \"GuaranteedUpdate etcd3\" type:*core.Event (started: 2019-12-17 09:31:16.030235691 +0000 UTC m=+129.765622269) (total time: 567.996798ms):\nTrace[1015467595]: [333.244642ms] [333.244642ms] initial value 
restored\nTrace[1015467595]: [567.969185ms] [234.411359ms] Transaction committed\nI1217 09:31:16.598531       1 trace.go:116] Trace[2011068876]: \"Patch\" url:/api/v1/namespaces/container-runtime-8558/events/terminate-cmd-rpaa0913944-5d6e-4625-a358-6a8b7cb690cb.15e11eab69b54282,user-agent:kubelet/v1.18.0 (linux/amd64) kubernetes/5ad586f,client:172.17.0.4 (started: 2019-12-17 09:31:16.030151737 +0000 UTC m=+129.765538296) (total time: 568.344914ms):\nTrace[2011068876]: [333.330794ms] [333.300684ms] About to apply patch\nTrace[2011068876]: [568.240216ms] [234.71434ms] Object stored in database\nI1217 09:31:16.624306       1 trace.go:116] Trace[723631110]: \"Delete\" url:/api/v1/namespaces/deployment-1554/pods/webserver-b5dd7476d-fxnx8,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-apps] Deployment iterative rollouts should eventually progress,client:172.17.0.1 (started: 2019-12-17 09:31:15.230240408 +0000 UTC m=+128.965626977) (total time: 1.394025057s):\nTrace[723631110]: [1.393970973s] [1.393947562s] Object deleted from database\nE1217 09:31:19.022703       1 upgradeaware.go:357] Error proxying data from client to backend: write tcp 172.17.0.3:36388->172.17.0.2:10250: write: broken pipe\nI1217 09:31:19.354025       1 controller.go:606] quota admission added evaluator for: statefulsets.apps\nW1217 09:31:21.524786       1 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured\nI1217 09:31:34.136157       1 controller.go:606] quota admission added evaluator for: namespaces\nI1217 09:31:49.518636       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:31:49.518684       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:31:49.534162       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:31:49.534466       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:31:49.553516       1 controller.go:606] 
quota admission added evaluator for: e2e-test-webhook-6747-crds.webhook.example.com\nI1217 09:31:49.825187       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:31:49.825223       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI1217 09:31:49.840025       1 client.go:361] parsed scheme: \"endpoint\"\nI1217 09:31:49.840070       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nE1217 09:31:50.176576       1 upgradeaware.go:357] Error proxying data from client to backend: write tcp 172.17.0.3:49120->172.17.0.4:10250: write: broken pipe\nE1217 09:31:50.180809       1 upgradeaware.go:371] Error proxying data from backend to client: tls: use of closed connection\n==== END logs for container kube-apiserver of pod kube-system/kube-apiserver-kind-control-plane ====\n==== START logs for container kube-controller-manager of pod kube-system/kube-controller-manager-kind-control-plane ====\nI1217 09:29:06.807294       1 serving.go:312] Generated self-signed cert in-memory\nI1217 09:29:07.754217       1 controllermanager.go:161] Version: v1.18.0-alpha.0.1812+5ad586f84e16e5\nI1217 09:29:07.755851       1 secure_serving.go:178] Serving securely on 127.0.0.1:10257\nI1217 09:29:07.755997       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nI1217 09:29:07.756947       1 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252\nI1217 09:29:07.757139       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI1217 09:29:07.756056       1 dynamic_cafile_content.go:166] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt\nI1217 09:29:07.756064       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt\nE1217 09:29:11.581077       1 leaderelection.go:331] error retrieving resource lock kube-system/kube-controller-manager: endpoints 
\"kube-controller-manager\" is forbidden: User \"system:kube-controller-manager\" cannot get resource \"endpoints\" in API group \"\" in the namespace \"kube-system\"\nI1217 09:29:15.832555       1 leaderelection.go:252] successfully acquired lease kube-system/kube-controller-manager\nI1217 09:29:15.832728       1 event.go:281] Event(v1.ObjectReference{Kind:\"Endpoints\", Namespace:\"kube-system\", Name:\"kube-controller-manager\", UID:\"67b810fe-8f5c-4e20-b77d-844590e39054\", APIVersion:\"v1\", ResourceVersion:\"203\", FieldPath:\"\"}): type: 'Normal' reason: 'LeaderElection' kind-control-plane_edbbeea2-a172-4368-8b31-816579c4d98d became leader\nI1217 09:29:15.832764       1 event.go:281] Event(v1.ObjectReference{Kind:\"Lease\", Namespace:\"kube-system\", Name:\"kube-controller-manager\", UID:\"ff880d17-cbea-4681-a974-b9abeae94e2d\", APIVersion:\"coordination.k8s.io/v1\", ResourceVersion:\"204\", FieldPath:\"\"}): type: 'Normal' reason: 'LeaderElection' kind-control-plane_edbbeea2-a172-4368-8b31-816579c4d98d became leader\nI1217 09:29:16.099823       1 plugins.go:100] No cloud provider specified.\nI1217 09:29:16.101662       1 shared_informer.go:197] Waiting for caches to sync for tokens\nI1217 09:29:16.202316       1 shared_informer.go:204] Caches are synced for tokens \nI1217 09:29:16.228941       1 controllermanager.go:533] Started \"daemonset\"\nI1217 09:29:16.229078       1 daemon_controller.go:255] Starting daemon sets controller\nI1217 09:29:16.229148       1 shared_informer.go:197] Waiting for caches to sync for daemon sets\nI1217 09:29:16.255489       1 controllermanager.go:533] Started \"csrcleaner\"\nI1217 09:29:16.255643       1 cleaner.go:81] Starting CSR cleaner controller\nI1217 09:29:16.290544       1 controllermanager.go:533] Started \"ttl\"\nI1217 09:29:16.290681       1 ttl_controller.go:116] Starting TTL controller\nI1217 09:29:16.291311       1 shared_informer.go:197] Waiting for caches to sync for TTL\nI1217 09:29:16.326297       1 
controllermanager.go:533] Started \"podgc\"\nI1217 09:29:16.326460       1 gc_controller.go:88] Starting GC controller\nI1217 09:29:16.326477       1 shared_informer.go:197] Waiting for caches to sync for GC\nI1217 09:29:16.377431       1 controllermanager.go:533] Started \"namespace\"\nI1217 09:29:16.377721       1 namespace_controller.go:200] Starting namespace controller\nI1217 09:29:16.377749       1 shared_informer.go:197] Waiting for caches to sync for namespace\nI1217 09:29:16.555778       1 controllermanager.go:533] Started \"horizontalpodautoscaling\"\nI1217 09:29:16.555856       1 horizontal.go:168] Starting HPA controller\nI1217 09:29:16.555863       1 shared_informer.go:197] Waiting for caches to sync for HPA\nI1217 09:29:16.805058       1 controllermanager.go:533] Started \"tokencleaner\"\nI1217 09:29:16.805397       1 tokencleaner.go:117] Starting token cleaner controller\nI1217 09:29:16.805692       1 shared_informer.go:197] Waiting for caches to sync for token_cleaner\nI1217 09:29:16.806275       1 shared_informer.go:204] Caches are synced for token_cleaner \nI1217 09:29:17.313657       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling\nI1217 09:29:17.313848       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io\nI1217 09:29:17.313970       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps\nI1217 09:29:17.314028       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints\nI1217 09:29:17.314060       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io\nI1217 09:29:17.314090       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions\nI1217 09:29:17.314130       1 resource_quota_monitor.go:228] QuotaMonitor created object count 
evaluator for networkpolicies.networking.k8s.io\nI1217 09:29:17.314154       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps\nI1217 09:29:17.314239       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps\nI1217 09:29:17.314336       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch\nI1217 09:29:17.314662       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch\nI1217 09:29:17.314978       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io\nI1217 09:29:17.315051       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io\nI1217 09:29:17.315116       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io\nW1217 09:29:17.315139       1 shared_informer.go:415] resyncPeriod 43448923691898 is smaller than resyncCheckPeriod 69156630008531 and the informer has already started. 
Changing it to 69156630008531\nI1217 09:29:17.315226       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates\nI1217 09:29:17.315298       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps\nI1217 09:29:17.315332       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io\nI1217 09:29:17.315402       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy\nI1217 09:29:17.315458       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges\nI1217 09:29:17.315647       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts\nI1217 09:29:17.315797       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps\nI1217 09:29:17.315924       1 controllermanager.go:533] Started \"resourcequota\"\nI1217 09:29:17.315974       1 resource_quota_controller.go:271] Starting resource quota controller\nI1217 09:29:17.316423       1 shared_informer.go:197] Waiting for caches to sync for resource quota\nI1217 09:29:17.316781       1 resource_quota_monitor.go:303] QuotaMonitor running\nI1217 09:29:17.352329       1 controllermanager.go:533] Started \"replicaset\"\nI1217 09:29:17.352491       1 replica_set.go:180] Starting replicaset controller\nI1217 09:29:17.352500       1 shared_informer.go:197] Waiting for caches to sync for ReplicaSet\nI1217 09:29:17.554694       1 controllermanager.go:533] Started \"cronjob\"\nI1217 09:29:17.554813       1 cronjob_controller.go:97] Starting CronJob Manager\nI1217 09:29:17.707406       1 controllermanager.go:533] Started \"csrsigning\"\nI1217 09:29:17.707733       1 certificate_controller.go:118] Starting certificate controller \"csrsigning\"\nI1217 09:29:17.708542       1 shared_informer.go:197] Waiting for caches to sync for 
certificate-csrsigning\nI1217 09:29:17.855702       1 controllermanager.go:533] Started \"csrapproving\"\nI1217 09:29:17.856067       1 certificate_controller.go:118] Starting certificate controller \"csrapproving\"\nI1217 09:29:17.856449       1 shared_informer.go:197] Waiting for caches to sync for certificate-csrapproving\nI1217 09:29:18.108638       1 controllermanager.go:533] Started \"persistentvolume-binder\"\nI1217 09:29:18.108799       1 pv_controller_base.go:294] Starting persistent volume controller\nI1217 09:29:18.109918       1 shared_informer.go:197] Waiting for caches to sync for persistent volume\nI1217 09:29:18.357406       1 controllermanager.go:533] Started \"persistentvolume-expander\"\nI1217 09:29:18.357724       1 expand_controller.go:319] Starting expand controller\nI1217 09:29:18.358033       1 shared_informer.go:197] Waiting for caches to sync for expand\nI1217 09:29:18.604967       1 controllermanager.go:533] Started \"clusterrole-aggregation\"\nI1217 09:29:18.605494       1 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator\nI1217 09:29:18.605523       1 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator\nI1217 09:29:18.855376       1 controllermanager.go:533] Started \"serviceaccount\"\nI1217 09:29:18.855684       1 serviceaccounts_controller.go:116] Starting service account controller\nI1217 09:29:18.856302       1 shared_informer.go:197] Waiting for caches to sync for service account\nI1217 09:29:19.112378       1 controllermanager.go:533] Started \"deployment\"\nI1217 09:29:19.112860       1 deployment_controller.go:152] Starting deployment controller\nI1217 09:29:19.113109       1 shared_informer.go:197] Waiting for caches to sync for deployment\nI1217 09:29:19.359453       1 controllermanager.go:533] Started \"statefulset\"\nI1217 09:29:19.359539       1 stateful_set.go:145] Starting stateful set controller\nI1217 09:29:19.359547       1 shared_informer.go:197] Waiting for caches to 
sync for stateful set\nI1217 09:29:19.606690       1 node_lifecycle_controller.go:77] Sending events to api server\nE1217 09:29:19.607008       1 core.go:231] failed to start cloud node lifecycle controller: no cloud provider provided\nW1217 09:29:19.607044       1 controllermanager.go:525] Skipping \"cloud-node-lifecycle\"\nW1217 09:29:19.607063       1 controllermanager.go:525] Skipping \"ttl-after-finished\"\nW1217 09:29:19.607072       1 controllermanager.go:525] Skipping \"root-ca-cert-publisher\"\nI1217 09:29:19.854682       1 controllermanager.go:533] Started \"replicationcontroller\"\nI1217 09:29:19.855282       1 replica_set.go:180] Starting replicationcontroller controller\nI1217 09:29:19.855607       1 shared_informer.go:197] Waiting for caches to sync for ReplicationController\nI1217 09:29:20.105419       1 controllermanager.go:533] Started \"job\"\nW1217 09:29:20.105453       1 core.go:245] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.\nW1217 09:29:20.105460       1 controllermanager.go:525] Skipping \"route\"\nI1217 09:29:20.105514       1 job_controller.go:143] Starting job controller\nI1217 09:29:20.105524       1 shared_informer.go:197] Waiting for caches to sync for job\nI1217 09:29:20.505801       1 controllermanager.go:533] Started \"disruption\"\nI1217 09:29:20.506216       1 disruption.go:330] Starting disruption controller\nI1217 09:29:20.506246       1 shared_informer.go:197] Waiting for caches to sync for disruption\nE1217 09:29:20.757193       1 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail\nW1217 09:29:20.757464       1 controllermanager.go:525] Skipping \"service\"\nI1217 09:29:20.904356       1 node_lifecycle_controller.go:388] Sending events to api server.\nI1217 09:29:20.904802       1 node_lifecycle_controller.go:423] Controller is using taint based evictions.\nI1217 09:29:20.905589       1 
taint_manager.go:162] Sending events to api server.\nI1217 09:29:20.906403       1 node_lifecycle_controller.go:520] Controller will reconcile labels.\nI1217 09:29:20.906671       1 controllermanager.go:533] Started \"nodelifecycle\"\nI1217 09:29:20.906755       1 node_lifecycle_controller.go:554] Starting node controller\nI1217 09:29:20.907758       1 shared_informer.go:197] Waiting for caches to sync for taint\nI1217 09:29:21.158333       1 controllermanager.go:533] Started \"attachdetach\"\nI1217 09:29:21.158448       1 attach_detach_controller.go:342] Starting attach detach controller\nI1217 09:29:21.158461       1 shared_informer.go:197] Waiting for caches to sync for attach detach\nI1217 09:29:21.404082       1 controllermanager.go:533] Started \"pvc-protection\"\nI1217 09:29:21.404158       1 pvc_protection_controller.go:100] Starting PVC protection controller\nI1217 09:29:21.404165       1 shared_informer.go:197] Waiting for caches to sync for PVC protection\nI1217 09:29:21.657534       1 controllermanager.go:533] Started \"endpoint\"\nI1217 09:29:21.657575       1 endpoints_controller.go:181] Starting endpoint controller\nI1217 09:29:21.657598       1 shared_informer.go:197] Waiting for caches to sync for endpoint\nI1217 09:29:21.904139       1 controllermanager.go:533] Started \"bootstrapsigner\"\nW1217 09:29:21.904414       1 controllermanager.go:525] Skipping \"endpointslice\"\nI1217 09:29:21.904593       1 shared_informer.go:197] Waiting for caches to sync for bootstrap_signer\nI1217 09:29:22.828903       1 controllermanager.go:533] Started \"garbagecollector\"\nI1217 09:29:22.832008       1 garbagecollector.go:129] Starting garbage collector controller\nI1217 09:29:22.832765       1 shared_informer.go:197] Waiting for caches to sync for garbage collector\nI1217 09:29:22.833017       1 graph_builder.go:282] GraphBuilder running\nI1217 09:29:22.849631       1 node_ipam_controller.go:94] Sending events to api server.\nI1217 09:29:32.855500       1 
range_allocator.go:82] Sending events to api server.\nI1217 09:29:32.855937       1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.\nI1217 09:29:32.856106       1 controllermanager.go:533] Started \"nodeipam\"\nI1217 09:29:32.856359       1 node_ipam_controller.go:162] Starting ipam controller\nI1217 09:29:32.856393       1 shared_informer.go:197] Waiting for caches to sync for node\nI1217 09:29:32.886387       1 controllermanager.go:533] Started \"pv-protection\"\nI1217 09:29:32.886975       1 shared_informer.go:197] Waiting for caches to sync for resource quota\nI1217 09:29:32.887101       1 pv_protection_controller.go:81] Starting PV protection controller\nI1217 09:29:32.887176       1 shared_informer.go:197] Waiting for caches to sync for PV protection\nI1217 09:29:32.903172       1 shared_informer.go:197] Waiting for caches to sync for garbage collector\nI1217 09:29:32.917454       1 shared_informer.go:204] Caches are synced for bootstrap_signer \nI1217 09:29:32.956764       1 shared_informer.go:204] Caches are synced for certificate-csrapproving \nI1217 09:29:32.959483       1 shared_informer.go:204] Caches are synced for expand \nI1217 09:29:32.988518       1 shared_informer.go:204] Caches are synced for PV protection \nI1217 09:29:33.005756       1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator \nI1217 09:29:33.009409       1 shared_informer.go:204] Caches are synced for certificate-csrsigning \nW1217 09:29:33.143102       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"kind-control-plane\" does not exist\nI1217 09:29:33.159298       1 shared_informer.go:204] Caches are synced for node \nI1217 09:29:33.159361       1 range_allocator.go:172] Starting range CIDR allocator\nI1217 09:29:33.159368       1 shared_informer.go:197] Waiting for caches to sync for 
cidrallocator
I1217 09:29:33.159375       1 shared_informer.go:204] Caches are synced for cidrallocator 
I1217 09:29:33.168306       1 range_allocator.go:373] Set node kind-control-plane PodCIDR to [10.244.0.0/24]
I1217 09:29:33.191689       1 shared_informer.go:204] Caches are synced for TTL 
I1217 09:29:33.204672       1 shared_informer.go:204] Caches are synced for PVC protection 
I1217 09:29:33.210421       1 shared_informer.go:204] Caches are synced for persistent volume 
I1217 09:29:33.227525       1 shared_informer.go:204] Caches are synced for GC 
I1217 09:29:33.230267       1 shared_informer.go:204] Caches are synced for daemon sets 
I1217 09:29:33.252488       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"2a4e41ea-d2ad-4d3f-a2af-f3e983d7ed09", APIVersion:"apps/v1", ResourceVersion:"238", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-fw7lc
I1217 09:29:33.255760       1 shared_informer.go:204] Caches are synced for ReplicaSet 
I1217 09:29:33.255914       1 shared_informer.go:204] Caches are synced for ReplicationController 
I1217 09:29:33.256106       1 shared_informer.go:204] Caches are synced for HPA 
I1217 09:29:33.258927       1 shared_informer.go:204] Caches are synced for attach detach 
I1217 09:29:33.259821       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"4081bf5a-c468-4624-986f-141f7682e044", APIVersion:"apps/v1", ResourceVersion:"186", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-f8mcv
I1217 09:29:33.259909       1 shared_informer.go:204] Caches are synced for stateful set 
I1217 09:29:33.259045       1 shared_informer.go:204] Caches are synced for endpoint 
E1217 09:29:33.290128       1 daemon_controller.go:290] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"2a4e41ea-d2ad-4d3f-a2af-f3e983d7ed09", ResourceVersion:"238", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712171756, loc:(*time.Location)(0x6b4e540)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001358960), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001358980), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0013589a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0013589c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:0.5.3", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0013589e0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001358a20)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0002546e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0008a08e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001575080), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00110a040)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0008a0930)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
I1217 09:29:33.308600       1 shared_informer.go:204] Caches are synced for taint 
I1217 09:29:33.308713       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
W1217 09:29:33.309527       1 node_lifecycle_controller.go:1058] Missing timestamp for Node kind-control-plane. Assuming now as a timestamp.
I1217 09:29:33.309925       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1217 09:29:33.309587       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-control-plane", UID:"02aad5bd-a337-4316-8440-c7d9935250c5", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kind-control-plane event: Registered Node kind-control-plane in Controller
I1217 09:29:33.309314       1 taint_manager.go:186] Starting NoExecuteTaintManager
I1217 09:29:33.406741       1 shared_informer.go:204] Caches are synced for disruption 
I1217 09:29:33.406777       1 disruption.go:338] Sending events to api server.
I1217 09:29:33.413507       1 shared_informer.go:204] Caches are synced for deployment 
I1217 09:29:33.419851       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"886abdbc-ccaa-4f61-90ef-17f38897a9f6", APIVersion:"apps/v1", ResourceVersion:"180", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
I1217 09:29:33.422662       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"local-path-storage", Name:"local-path-provisioner", UID:"4f6a69ea-1a6c-4c32-ba07-6495884b482c", APIVersion:"apps/v1", ResourceVersion:"268", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set local-path-provisioner-7745554f7f to 1
I1217 09:29:33.428468       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"79d299c6-fcf1-4e08-bc78-d59aea3b0484", APIVersion:"apps/v1", ResourceVersion:"377", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-whdtq
I1217 09:29:33.432369       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"79d299c6-fcf1-4e08-bc78-d59aea3b0484", APIVersion:"apps/v1", ResourceVersion:"377", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-rdtng
I1217 09:29:33.438280       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"local-path-storage", Name:"local-path-provisioner-7745554f7f", UID:"96058d52-3b8e-47cc-bf0c-8cf614faa721", APIVersion:"apps/v1", ResourceVersion:"378", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: local-path-provisioner-7745554f7f-jktcl
I1217 09:29:33.456575       1 shared_informer.go:204] Caches are synced for service account 
I1217 09:29:33.478539       1 shared_informer.go:204] Caches are synced for namespace 
I1217 09:29:33.487589       1 shared_informer.go:204] Caches are synced for resource quota 
I1217 09:29:33.503476       1 shared_informer.go:204] Caches are synced for garbage collector 
I1217 09:29:33.505986       1 shared_informer.go:204] Caches are synced for job 
I1217 09:29:33.516963       1 shared_informer.go:204] Caches are synced for resource quota 
I1217 09:29:33.533558       1 shared_informer.go:204] Caches are synced for garbage collector 
I1217 09:29:33.533587       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W1217 09:29:54.245796       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kind-worker" does not exist
I1217 09:29:54.257010       1 range_allocator.go:373] Set node kind-worker PodCIDR to [10.244.1.0/24]
I1217 09:29:54.266827       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"4081bf5a-c468-4624-986f-141f7682e044", APIVersion:"apps/v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-h7xw6
I1217 09:29:54.266885       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"2a4e41ea-d2ad-4d3f-a2af-f3e983d7ed09", APIVersion:"apps/v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-b98rv
E1217 09:29:54.286146       1 daemon_controller.go:290] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"4081bf5a-c468-4624-986f-141f7682e044", ResourceVersion:"429", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712171755, loc:(*time.Location)(0x6b4e540)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001adab20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001a94cc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001adab40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001adab60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001adaba0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001877db0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001b90708), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00191ba40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00012cb40)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001b90748)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
W1217 09:29:54.381403       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kind-worker2" does not exist
I1217 09:29:54.388774       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"2a4e41ea-d2ad-4d3f-a2af-f3e983d7ed09", APIVersion:"apps/v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-4gr5t
I1217 09:29:54.391286       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"4081bf5a-c468-4624-986f-141f7682e044", APIVersion:"apps/v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-cwrhc
I1217 09:29:54.390428       1 range_allocator.go:373] Set node kind-worker2 PodCIDR to [10.244.2.0/24]
E1217 09:29:54.410606       1 daemon_controller.go:290] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"2a4e41ea-d2ad-4d3f-a2af-f3e983d7ed09", ResourceVersion:"494", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712171756, loc:(*time.Location)(0x6b4e540)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0018e1020), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0018e1040), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0018e1060), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0018e1080), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:0.5.3", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0018e10a0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0018e10e0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0000f5450), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00191d328), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0011905a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00012c908)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00191d370)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:1, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
E1217 09:29:54.416404       1 daemon_controller.go:290] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"4081bf5a-c468-4624-986f-141f7682e044", ResourceVersion:"498", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712171755, loc:(*time.Location)(0x6b4e540)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001a3f2c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001860ec0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001a3f2e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001a3f300), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), 
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1812_5ad586f84e16e5\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001a3f340)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0017d3e00), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc0019d9918), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"beta.kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", 
DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001339320), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"CriticalAddonsOnly\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e1c0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0019d9958)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:1, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again\nW1217 09:29:58.315467       1 node_lifecycle_controller.go:1058] Missing timestamp for Node kind-worker. 
Assuming now as a timestamp.\nI1217 09:29:58.315916       1 event.go:281] Event(v1.ObjectReference{Kind:\"Node\", Namespace:\"\", Name:\"kind-worker\", UID:\"d0d178e3-45e1-428d-8224-18ccb519c892\", APIVersion:\"\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'RegisteredNode' Node kind-worker event: Registered Node kind-worker in Controller\nI1217 09:29:58.316197       1 event.go:281] Event(v1.ObjectReference{Kind:\"Node\", Namespace:\"\", Name:\"kind-worker2\", UID:\"8d9fe470-04b7-40aa-844e-6b31492ca35f\", APIVersion:\"\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'RegisteredNode' Node kind-worker2 event: Registered Node kind-worker2 in Controller\nW1217 09:29:58.316452       1 node_lifecycle_controller.go:1058] Missing timestamp for Node kind-worker2. Assuming now as a timestamp.\nI1217 09:30:18.320574       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.\nI1217 09:30:58.514215       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"services-7332\", Name:\"externalsvc\", UID:\"8276fae2-e31e-422d-a5a3-384984d9e5db\", APIVersion:\"v1\", ResourceVersion:\"818\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: externalsvc-swdz6\nI1217 09:30:58.531164       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"services-7332\", Name:\"externalsvc\", UID:\"8276fae2-e31e-422d-a5a3-384984d9e5db\", APIVersion:\"v1\", ResourceVersion:\"818\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: externalsvc-vfwrt\nI1217 09:30:59.433913       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: 
cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-xh9qx\nI1217 09:30:59.466356       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-6kdkd\nI1217 09:30:59.466410       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-drzbw\nI1217 09:30:59.515610       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-vpgp7\nI1217 09:30:59.516417       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-t4qt4\nI1217 09:30:59.516455       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' 
Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-9fnd9\nI1217 09:30:59.516604       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-jsxr8\nI1217 09:30:59.540131       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-l4ctd\nI1217 09:30:59.540301       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-6qchg\nI1217 09:30:59.540423       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-7fsdg\nI1217 09:30:59.540643       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 
'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-cjcbx\nI1217 09:30:59.541765       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-5vtqm\nI1217 09:30:59.545595       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-9hn8q\nI1217 09:30:59.545677       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-k28dr\nI1217 09:30:59.546091       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-bkmcp\nI1217 09:30:59.596549       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 
'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-thk8l\nI1217 09:30:59.600093       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-vntcj\nI1217 09:30:59.619690       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-zpqhp\nI1217 09:30:59.620715       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-snppg\nI1217 09:30:59.621342       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubelet-424\", Name:\"cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf\", UID:\"0bbd21e3-d3fb-4b8e-bc51-65444213cde1\", APIVersion:\"v1\", ResourceVersion:\"921\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cleanup20-9c9dce92-a6ba-4749-9927-e3b1095d11bf-n5mz6\nI1217 09:30:59.696678       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"986\", FieldPath:\"\"}): type: 'Normal' reason: 
'ScalingReplicaSet' Scaled up replica set webserver-595b5b9587 to 6\nI1217 09:30:59.714752       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-595b5b9587\", UID:\"0600d7dd-3348-46e7-a963-f09d56c9ee06\", APIVersion:\"apps/v1\", ResourceVersion:\"992\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-595b5b9587-gn9zc\nI1217 09:30:59.725723       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-595b5b9587\", UID:\"0600d7dd-3348-46e7-a963-f09d56c9ee06\", APIVersion:\"apps/v1\", ResourceVersion:\"992\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-595b5b9587-lwdsv\nI1217 09:30:59.725845       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-595b5b9587\", UID:\"0600d7dd-3348-46e7-a963-f09d56c9ee06\", APIVersion:\"apps/v1\", ResourceVersion:\"992\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-595b5b9587-svwjr\nI1217 09:30:59.750427       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-595b5b9587\", UID:\"0600d7dd-3348-46e7-a963-f09d56c9ee06\", APIVersion:\"apps/v1\", ResourceVersion:\"992\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-595b5b9587-tw9tn\nI1217 09:30:59.750816       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-595b5b9587\", UID:\"0600d7dd-3348-46e7-a963-f09d56c9ee06\", APIVersion:\"apps/v1\", ResourceVersion:\"992\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-595b5b9587-m5zs7\nI1217 09:30:59.751357       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-595b5b9587\", 
UID:\"0600d7dd-3348-46e7-a963-f09d56c9ee06\", APIVersion:\"apps/v1\", ResourceVersion:\"992\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-595b5b9587-b6mgb\nI1217 09:31:00.810341       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"gc-3026\", Name:\"simpletest.rc\", UID:\"5ad64da4-7820-484e-ac82-7c0541e4c86f\", APIVersion:\"v1\", ResourceVersion:\"1079\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: simpletest.rc-5ntjn\nI1217 09:31:00.828801       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"gc-3026\", Name:\"simpletest.rc\", UID:\"5ad64da4-7820-484e-ac82-7c0541e4c86f\", APIVersion:\"v1\", ResourceVersion:\"1079\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: simpletest.rc-h42f5\nI1217 09:31:01.315440       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicationController\", Namespace:\"kubectl-4807\", Name:\"agnhost-master\", UID:\"75048bea-d7bd-4888-9a0a-7d108fa60e61\", APIVersion:\"v1\", ResourceVersion:\"1110\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: agnhost-master-wwznp\nI1217 09:31:01.724667       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"1125\", FieldPath:\"\"}): type: 'Warning' reason: 'DeploymentRollbackRevisionNotFound' Unable to find last revision.\nI1217 09:31:02.204034       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"1143\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-79fbcb94c6 to 2\nI1217 09:31:02.225105       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", 
Namespace:\"deployment-1554\", Name:\"webserver-79fbcb94c6\", UID:\"d4cb9edb-c4c7-4a34-8930-c458bd7642cb\", APIVersion:\"apps/v1\", ResourceVersion:\"1144\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-79fbcb94c6-9wt8b\nI1217 09:31:02.233185       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"1143\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set webserver-595b5b9587 to 5\nI1217 09:31:02.244361       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-79fbcb94c6\", UID:\"d4cb9edb-c4c7-4a34-8930-c458bd7642cb\", APIVersion:\"apps/v1\", ResourceVersion:\"1144\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-79fbcb94c6-5jj5g\nI1217 09:31:02.284852       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"1146\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-79fbcb94c6 to 3\nI1217 09:31:02.315552       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-595b5b9587\", UID:\"0600d7dd-3348-46e7-a963-f09d56c9ee06\", APIVersion:\"apps/v1\", ResourceVersion:\"1148\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-595b5b9587-lwdsv\nI1217 09:31:02.366596       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-79fbcb94c6\", UID:\"d4cb9edb-c4c7-4a34-8930-c458bd7642cb\", APIVersion:\"apps/v1\", ResourceVersion:\"1170\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: 
webserver-79fbcb94c6-5gwqp\nI1217 09:31:03.767669       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"1226\", FieldPath:\"\"}): type: 'Normal' reason: 'DeploymentRollback' Rolled back deployment \"webserver\" to revision 1\nE1217 09:31:04.169133       1 tokens_controller.go:260] error synchronizing serviceaccount kubectl-2301/default: secrets \"default-token-dxgxv\" is forbidden: unable to create new content in namespace kubectl-2301 because it is being terminated\nE1217 09:31:04.305465       1 tokens_controller.go:260] error synchronizing serviceaccount multi-az-444/default: secrets \"default-token-s4txz\" is forbidden: unable to create new content in namespace multi-az-444 because it is being terminated\nE1217 09:31:04.305860       1 tokens_controller.go:260] error synchronizing serviceaccount nettest-391/default: secrets \"default-token-tmxjq\" is forbidden: unable to create new content in namespace nettest-391 because it is being terminated\nI1217 09:31:04.306682       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for e2e-test-resourcequota-3541-crds.resourcequota.example.com\nI1217 09:31:04.306847       1 shared_informer.go:197] Waiting for caches to sync for resource quota\nI1217 09:31:04.407118       1 shared_informer.go:204] Caches are synced for resource quota \nI1217 09:31:04.622906       1 shared_informer.go:197] Waiting for caches to sync for garbage collector\nI1217 09:31:04.622973       1 shared_informer.go:204] Caches are synced for garbage collector \nI1217 09:31:05.210422       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"1274\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set 
webserver-595b5b9587 to 6\nI1217 09:31:05.220222       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-595b5b9587\", UID:\"0600d7dd-3348-46e7-a963-f09d56c9ee06\", APIVersion:\"apps/v1\", ResourceVersion:\"1275\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-595b5b9587-mqn78\nI1217 09:31:05.264609       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-595b5b9587\", UID:\"0600d7dd-3348-46e7-a963-f09d56c9ee06\", APIVersion:\"apps/v1\", ResourceVersion:\"1287\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-595b5b9587-bw6dn\nI1217 09:31:05.282926       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"1277\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-595b5b9587 to 7\nI1217 09:31:06.746093       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"1328\", FieldPath:\"\"}): type: 'Normal' reason: 'DeploymentRollback' Rolled back deployment \"webserver\" to revision 2\nI1217 09:31:06.766838       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"1329\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set webserver-595b5b9587 to 6\nI1217 09:31:06.778859       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", 
ResourceVersion:\"1332\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-79fbcb94c6 to 4\nI1217 09:31:06.786037       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-595b5b9587\", UID:\"0600d7dd-3348-46e7-a963-f09d56c9ee06\", APIVersion:\"apps/v1\", ResourceVersion:\"1333\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-595b5b9587-bw6dn\nI1217 09:31:06.796430       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-79fbcb94c6\", UID:\"d4cb9edb-c4c7-4a34-8930-c458bd7642cb\", APIVersion:\"apps/v1\", ResourceVersion:\"1336\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-79fbcb94c6-d42gq\nE1217 09:31:06.994069       1 tokens_controller.go:260] error synchronizing serviceaccount metrics-grabber-1775/default: secrets \"default-token-kzj8m\" is forbidden: unable to create new content in namespace metrics-grabber-1775 because it is being terminated\nI1217 09:31:07.500983       1 resource_quota_controller.go:305] Resource quota has been deleted resourcequota-5312/quota-for-e2e-test-resourcequota-3541-crds\nI1217 09:31:07.971436       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"1385\", FieldPath:\"\"}): type: 'Normal' reason: 'DeploymentRollback' Rolled back deployment \"webserver\" to revision 3\nI1217 09:31:08.047999       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-595b5b9587\", UID:\"0600d7dd-3348-46e7-a963-f09d56c9ee06\", APIVersion:\"apps/v1\", ResourceVersion:\"1388\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-595b5b9587-ws6vv\nI1217 09:31:08.109656       1 event.go:281] 
Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-595b5b9587", UID:"0600d7dd-3348-46e7-a963-f09d56c9ee06", APIVersion:"apps/v1", ResourceVersion:"1397", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-595b5b9587-h2qnn
I1217 09:31:08.344445       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-79fbcb94c6", UID:"d4cb9edb-c4c7-4a34-8930-c458bd7642cb", APIVersion:"apps/v1", ResourceVersion:"1353", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-79fbcb94c6-jt9s7
I1217 09:31:08.443729       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-79fbcb94c6", UID:"d4cb9edb-c4c7-4a34-8930-c458bd7642cb", APIVersion:"apps/v1", ResourceVersion:"1416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-79fbcb94c6-xbbk6
I1217 09:31:09.331907       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"1455", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set webserver-595b5b9587 to 2
I1217 09:31:09.354523       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"1460", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-b5dd7476d to 4
I1217 09:31:09.385000       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"1465", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-fxnx8
I1217 09:31:09.402296       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"1465", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-9cqpg
I1217 09:31:09.403085       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"1465", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-4nlqf
I1217 09:31:09.427974       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-595b5b9587", UID:"0600d7dd-3348-46e7-a963-f09d56c9ee06", APIVersion:"apps/v1", ResourceVersion:"1462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-595b5b9587-h2qnn
I1217 09:31:09.429057       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-595b5b9587", UID:"0600d7dd-3348-46e7-a963-f09d56c9ee06", APIVersion:"apps/v1", ResourceVersion:"1462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-595b5b9587-mqn78
I1217 09:31:09.444518       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-595b5b9587", UID:"0600d7dd-3348-46e7-a963-f09d56c9ee06", APIVersion:"apps/v1", ResourceVersion:"1462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-595b5b9587-ws6vv
I1217 09:31:09.455035       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-595b5b9587", UID:"0600d7dd-3348-46e7-a963-f09d56c9ee06", APIVersion:"apps/v1", ResourceVersion:"1462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-595b5b9587-svwjr
I1217 09:31:09.465357       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"1465", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-pjnt5
I1217 09:31:09.737006       1 namespace_controller.go:185] Namespace has been deleted multi-az-444
I1217 09:31:09.827782       1 namespace_controller.go:185] Namespace has been deleted kubectl-2301
I1217 09:31:09.852824       1 namespace_controller.go:185] Namespace has been deleted nettest-391
I1217 09:31:11.498652       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"1541", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-b5dd7476d to 5
I1217 09:31:11.563221       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"1541", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-79fbcb94c6 to 5
I1217 09:31:11.563529       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"1543", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-8mxww
I1217 09:31:11.625579       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-79fbcb94c6", UID:"d4cb9edb-c4c7-4a34-8930-c458bd7642cb", APIVersion:"apps/v1", ResourceVersion:"1545", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-79fbcb94c6-xlrmr
I1217 09:31:12.292959       1 namespace_controller.go:185] Namespace has been deleted metrics-grabber-1775
I1217 09:31:12.402845       1 namespace_controller.go:185] Namespace has been deleted zone-support-6197
I1217 09:31:13.305354       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-79fbcb94c6", UID:"d4cb9edb-c4c7-4a34-8930-c458bd7642cb", APIVersion:"apps/v1", ResourceVersion:"1581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-79fbcb94c6-s8rxp
I1217 09:31:13.305708       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"1555", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-ckjbc
I1217 09:31:14.924199       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"1610", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-qtv6r
E1217 09:31:15.562194       1 pv_controller.go:1336] error finding provisioning plugin for claim provisioning-2780/pvc-w89kd: storageclass.storage.k8s.io "provisioning-2780" not found
I1217 09:31:15.562652       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"provisioning-2780", Name:"pvc-w89kd", UID:"f9b83b71-a6dd-4ea0-8fce-13b63af95974", APIVersion:"v1", ResourceVersion:"1637", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' storageclass.storage.k8s.io "provisioning-2780" not found
I1217 09:31:16.363998       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"1638", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-s7rh9
I1217 09:31:16.895495       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"1679", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-gqm6m
E1217 09:31:18.998380       1 pv_controller.go:1336] error finding provisioning plugin for claim volume-8426/pvc-hml95: storageclass.storage.k8s.io "volume-8426" not found
I1217 09:31:18.998909       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"volume-8426", Name:"pvc-hml95", UID:"81529f5f-6111-44d7-b3e0-47ac9efb9e60", APIVersion:"v1", ResourceVersion:"1810", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' storageclass.storage.k8s.io "volume-8426" not found
E1217 09:31:19.135680       1 pv_controller.go:1336] error finding provisioning plugin for claim persistent-local-volumes-test-9305/pvc-lhghq: no volume plugin matched
E1217 09:31:19.136384       1 pv_controller.go:1336] error finding provisioning plugin for claim provisioning-635/pvc-4zhww: storageclass.storage.k8s.io "provisioning-635" not found
I1217 09:31:19.136220       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"persistent-local-volumes-test-9305", Name:"pvc-lhghq", UID:"2e9bf98e-2f5e-4ecc-8484-a752e592b609", APIVersion:"v1", ResourceVersion:"1823", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' no volume plugin matched
I1217 09:31:19.136552       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"provisioning-635", Name:"pvc-4zhww", UID:"6abca6ab-b78a-4e39-b38a-9fbefa620641", APIVersion:"v1", ResourceVersion:"1824", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' storageclass.storage.k8s.io "provisioning-635" not found
E1217 09:31:19.381527       1 pv_controller.go:1336] error finding provisioning plugin for claim provisioning-6946/pvc-qb4s8: storageclass.storage.k8s.io "provisioning-6946" not found
I1217 09:31:19.381948       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"provisioning-6946", Name:"pvc-qb4s8", UID:"1778a358-4df6-4bf7-9b77-8d085cd90826", APIVersion:"v1", ResourceVersion:"1841", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' storageclass.storage.k8s.io "provisioning-6946" not found
I1217 09:31:19.589494       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"statefulset-4376", Name:"datadir-ss-0", UID:"882c2e4c-59dd-4d40-8011-ff3ad9289695", APIVersion:"v1", ResourceVersion:"1850", FieldPath:""}): type: 'Normal' reason: 'WaitForFirstConsumer' waiting for first consumer to be created before binding
I1217 09:31:19.590061       1 event.go:281] Event(v1.ObjectReference{Kind:"StatefulSet", Namespace:"statefulset-4376", Name:"ss", UID:"ef8ecda8-bd97-421f-9beb-dad3c5725580", APIVersion:"apps/v1", ResourceVersion:"1844", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success
I1217 09:31:19.606114       1 event.go:281] Event(v1.ObjectReference{Kind:"StatefulSet", Namespace:"statefulset-4376", Name:"ss", UID:"ef8ecda8-bd97-421f-9beb-dad3c5725580", APIVersion:"apps/v1", ResourceVersion:"1844", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' create Pod ss-0 in StatefulSet ss successful
I1217 09:31:19.748347       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"statefulset-4376", Name:"datadir-ss-0", UID:"882c2e4c-59dd-4d40-8011-ff3ad9289695", APIVersion:"v1", ResourceVersion:"1863", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
E1217 09:31:21.527489       1 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I1217 09:31:21.990227       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"1713", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set webserver-595b5b9587 to 1
I1217 09:31:22.018695       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"1932", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-b5dd7476d to 6
I1217 09:31:22.087335       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"1936", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-sfckq
I1217 09:31:22.087895       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-595b5b9587", UID:"0600d7dd-3348-46e7-a963-f09d56c9ee06", APIVersion:"apps/v1", ResourceVersion:"1931", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-595b5b9587-tw9tn
E1217 09:31:22.542798       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1217 09:31:23.545209       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1217 09:31:23.743285       1 tokens_controller.go:260] error synchronizing serviceaccount kubectl-4807/default: secrets "default-token-424kv" is forbidden: unable to create new content in namespace kubectl-4807 because it is being terminated
I1217 09:31:23.902276       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"2018", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set webserver-595b5b9587 to 0
I1217 09:31:23.925142       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"2031", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-b5dd7476d to 7
I1217 09:31:23.946223       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"2033", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-hx6d8
I1217 09:31:23.946273       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-595b5b9587", UID:"0600d7dd-3348-46e7-a963-f09d56c9ee06", APIVersion:"apps/v1", ResourceVersion:"2030", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-595b5b9587-b6mgb
E1217 09:31:24.338350       1 tokens_controller.go:260] error synchronizing serviceaccount projected-7871/default: secrets "default-token-ct8h2" is forbidden: unable to create new content in namespace projected-7871 because it is being terminated
I1217 09:31:24.356763       1 namespace_controller.go:185] Namespace has been deleted watch-4767
I1217 09:31:24.409897       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"2082", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-b5dd7476d to 8
I1217 09:31:24.415910       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"2083", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-pccv8
E1217 09:31:24.547939       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1217 09:31:25.032644       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"2106", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set webserver-79fbcb94c6 to 4
I1217 09:31:25.045425       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"2106", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-b5dd7476d to 9
I1217 09:31:25.051183       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-79fbcb94c6", UID:"d4cb9edb-c4c7-4a34-8930-c458bd7642cb", APIVersion:"apps/v1", ResourceVersion:"2112", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-79fbcb94c6-xlrmr
I1217 09:31:25.051776       1 namespace_controller.go:185] Namespace has been deleted security-context-6518
I1217 09:31:25.056353       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"2116", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-8rwsq
E1217 09:31:25.550900       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1217 09:31:26.023547       1 tokens_controller.go:260] error synchronizing serviceaccount resourcequota-5312/default: secrets "default-token-d45nq" is forbidden: unable to create new content in namespace resourcequota-5312 because it is being terminated
I1217 09:31:26.113847       1 resource_quota_controller.go:305] Resource quota has been deleted resourcequota-5312/test-quota
E1217 09:31:26.555377       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1217 09:31:27.129551       1 tokens_controller.go:260] error synchronizing serviceaccount hostpath-6897/default: secrets "default-token-rcnc2" is forbidden: unable to create new content in namespace hostpath-6897 because it is being terminated
E1217 09:31:27.492585       1 tokens_controller.go:260] error synchronizing serviceaccount node-lease-test-1530/default: secrets "default-token-hkrpn" is forbidden: unable to create new content in namespace node-lease-test-1530 because it is being terminated
E1217 09:31:27.533249       1 tokens_controller.go:260] error synchronizing serviceaccount tables-4544/default: secrets "default-token-8gmbk" is forbidden: unable to create new content in namespace tables-4544 because it is being terminated
E1217 09:31:27.564194       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1217 09:31:27.764247       1 namespace_controller.go:185] Namespace has been deleted projected-3161
E1217 09:31:28.567202       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1217 09:31:29.100766       1 pv_controller.go:1336] error finding provisioning plugin for claim provisioning-8292/pvc-8wfml: storageclass.storage.k8s.io "provisioning-8292" not found
I1217 09:31:29.101191       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"provisioning-8292", Name:"pvc-8wfml", UID:"df331e4a-827a-4753-8ace-b93e7fd018d3", APIVersion:"v1", ResourceVersion:"2232", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' storageclass.storage.k8s.io "provisioning-8292" not found
I1217 09:31:29.375592       1 namespace_controller.go:185] Namespace has been deleted emptydir-2067
I1217 09:31:29.466285       1 namespace_controller.go:185] Namespace has been deleted projected-7871
E1217 09:31:29.570810       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1217 09:31:30.574279       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1217 09:31:30.648549       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"2141", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set webserver-79fbcb94c6 to 3
I1217 09:31:30.660077       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"2141", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-b5dd7476d to 10
I1217 09:31:30.672167       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-79fbcb94c6", UID:"d4cb9edb-c4c7-4a34-8930-c458bd7642cb", APIVersion:"apps/v1", ResourceVersion:"2266", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-79fbcb94c6-d42gq
I1217 09:31:30.675609       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"2270", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-rbmgw
I1217 09:31:31.056073       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"2283", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set webserver-79fbcb94c6 to 2
I1217 09:31:31.078563       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-79fbcb94c6", UID:"d4cb9edb-c4c7-4a34-8930-c458bd7642cb", APIVersion:"apps/v1", ResourceVersion:"2306", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-79fbcb94c6-jt9s7
I1217 09:31:31.147706       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-79fbcb94c6", UID:"d4cb9edb-c4c7-4a34-8930-c458bd7642cb", APIVersion:"apps/v1", ResourceVersion:"2314", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-79fbcb94c6-hnjr4
I1217 09:31:31.197550       1 namespace_controller.go:185] Namespace has been deleted resourcequota-5312
I1217 09:31:31.213519       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-79fbcb94c6", UID:"d4cb9edb-c4c7-4a34-8930-c458bd7642cb", APIVersion:"apps/v1", ResourceVersion:"2327", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-79fbcb94c6-8t6ln
I1217 09:31:31.269521       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"2303", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-nhblm
I1217 09:31:31.375222       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"2349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-g8zv4
I1217 09:31:31.447109       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"2358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-9l6hz
I1217 09:31:31.494981       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"2372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-76vng
E1217 09:31:31.583606       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1217 09:31:31.890875       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"2391", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-b5dd7476d to 12
I1217 09:31:31.892549       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"2392", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-rkdd4
I1217 09:31:31.904507       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"2392", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-b5dd7476d-8m2tc
I1217 09:31:31.941934       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"deployment-1554", Name:"webserver", UID:"ca807157-460e-41b8-810d-510f844e2409", APIVersion:"apps/v1", ResourceVersion:"2399", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set webserver-b5dd7476d to 11
I1217 09:31:31.965353       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"deployment-1554", Name:"webserver-b5dd7476d", UID:"993ac610-95d3-43fe-8906-6dd3362d3275", APIVersion:"apps/v1", ResourceVersion:"2405", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-b5dd7476d-76vng
I1217 09:31:32.343776       1 namespace_controller.go:185] Namespace has been deleted hostpath-6897
I1217 09:31:32.498495       1 namespace_controller.go:185] Namespace has been deleted provisioning-3864
E1217 09:31:32.586629       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1217 09:31:32.684218       1 namespace_controller.go:185] Namespace has been deleted node-lease-test-1530
I1217 09:31:33.567691       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"statefulset-4376", Name:"datadir-ss-0", UID:"882c2e4c-59dd-4d40-8011-ff3ad9289695", APIVersion:"v1", ResourceVersion:"1863", FieldPath:""}): type: 'Normal' reason: 'ExternalProvisioning' waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
E1217 09:31:33.590627       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1217 09:31:34.011483       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"replication-controller-2758", Name:"condition-test", UID:"dbc12783-3bdb-46e0-8f63-5915f58f52fa", APIVersion:"v1", ResourceVersion:"2514", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: condition-test-4zwvl
I1217 09:31:34.030419       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"replication-controller-2758", Name:"condition-test", UID:"dbc12783-3bdb-46e0-8f63-5915f58f52fa", APIVersion:"v1", ResourceVersion:"2514", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "condition-test-lpdtw" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
I1217 09:31:34.040122       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"replication-controller-2758", Name:"condition-test", UID:"dbc12783-3bdb-46e0-8f63-5915f58f52fa", APIVersion:"v1", ResourceVersion:"2514", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: condition-test-mvrv4
E1217 09:31:34.049102       1 replica_set.go:534] sync "replication-controller-2758/condition-test" failed with pods "condition-test-lpdtw" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
I1217 09:31:34.055573       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"replication-controller-2758", Name:"condition-test", UID:"dbc12783-3bdb-46e0-8f63-5915f58f52fa", APIVersion:"v1", ResourceVersion:"2514", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "condition-test-28zdl" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
E1217 09:31:34.101306       1 replica_set.go:534] sync "replication-controller-2758/condition-test" failed with pods "condition-test-28zdl" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
E1217 09:31:34.105761       1 replica_set.go:534] sync "replication-controller-2758/condition-test" failed with pods "condition-test-g2z6s" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
I1217 09:31:34.106124       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"replication-controller-2758", Name:"condition-test", UID:"dbc12783-3bdb-46e0-8f63-5915f58f52fa", APIVersion:"v1", ResourceVersion:"2533", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "condition-test-g2z6s" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
E1217 09:31:34.116175       1 replica_set.go:534] sync "replication-controller-2758/condition-test" failed with pods "condition-test-rb9fc" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
I1217 09:31:34.117364       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"replication-controller-2758", Name:"condition-test", UID:"dbc12783-3bdb-46e0-8f63-5915f58f52fa", APIVersion:"v1", ResourceVersion:"2533", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "condition-test-rb9fc" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
E1217 09:31:34.158847       1 replica_set.go:534] sync "replication-controller-2758/condition-test" failed with pods "condition-test-6wzbr" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
I1217 09:31:34.162250       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"replication-controller-2758", Name:"condition-test", UID:"dbc12783-3bdb-46e0-8f63-5915f58f52fa", APIVersion:"v1", ResourceVersion:"2533", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "condition-test-6wzbr" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
E1217 09:31:34.245802       1 replica_set.go:534] sync "replication-controller-2758/condition-test" failed with pods "condition-test-55f2q" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
I1217 09:31:34.246362       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"replication-controller-2758", Name:"condition-test", UID:"dbc12783-3bdb-46e0-8f63-5915f58f52fa", APIVersion:"v1", ResourceVersion:"2533", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "condition-test-55f2q" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
E1217 09:31:34.422924       1 replica_set.go:534] sync "replication-controller-2758/condition-test" failed with pods "condition-test-fbj45" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
I1217 09:31:34.423513       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"replication-controller-2758", Name:"condition-test", UID:"dbc12783-3bdb-46e0-8f63-5915f58f52fa", APIVersion:"v1", ResourceVersion:"2533", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "condition-test-fbj45" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
E1217 09:31:34.594388       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1217 09:31:34.668017       1 shared_informer.go:197] Waiting for caches to sync for resource quota
I1217 09:31:34.668334       1 shared_informer.go:204] Caches are synced for resource quota 
E1217 09:31:34.748460       1 replica_set.go:534] sync "replication-controller-2758/condition-test" failed with pods "condition-test-ps2vx" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
I1217 09:31:34.748799       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"replication-controller-2758", Name:"condition-test", UID:"dbc12783-3bdb-46e0-8f63-5915f58f52fa", APIVersion:"v1", ResourceVersion:"2533", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "condition-test-ps2vx" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2
I1217 09:31:34.832300       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"webhook-2744", Name:"sample-webhook-deployment", UID:"74903160-fd19-4a45-8ac9-d36245be9728", APIVersion:"apps/v1", ResourceVersion:"2565", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
I1217 09:31:34.879488       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I1217 09:31:34.879983       1 shared_informer.go:204] Caches are synced for garbage collector 
I1217 09:31:34.880805       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"webhook-2744", Name:"sample-webhook-deployment-5f65f8c764", UID:"bd8bb754-91db-47c1-b4e5-29faf35f63b5", APIVersion:"apps/v1", ResourceVersion:"2569", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: sample-webhook-deployment-5f65f8c764-457k5
E1217 09:31:35.596324       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1217 09:31:35.819187       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"webhook-3924", Name:"sample-webhook-deployment", UID:"4ccbb446-3497-436b-9198-3d3c9577c9c0", APIVersion:"apps/v1", ResourceVersion:"2628", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set sample-webhook-deployment-5f65f8c764 to 1
I1217 09:31:35.877570       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"webhook-3924", Name:"sample-webhook-deployment-5f65f8c764", UID:"62fc704b-1779-4903-b72e-a42a9271cd5f", APIVersion:"apps/v1", ResourceVersion:"2629", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: sample-webhook-deployment-5f65f8c764-2wjwt
E1217 09:31:36.600435       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1217 09:31:36.758046       1 pv_controller.go:1336] error finding provisioning plugin for claim provisioning-209/pvc-nqc2k: storageclass.storage.k8s.io "provisioning-209" not found
I1217 09:31:36.758769       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"provisioning-209", Name:"pvc-nqc2k", UID:"c7d3e30e-eaf5-4b28-8025-c20be18fa071", APIVersion:"v1", ResourceVersion:"2679", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' storageclass.storage.k8s.io "provisioning-209" not found
E1217 09:31:37.603281       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1217 09:31:37.803759       1 tokens_controller.go:260] error synchronizing serviceaccount pv-9509/default: secrets "default-token-qxjpg" is forbidden: unable to create new content in namespace pv-9509 because it is being terminated
E1217 09:31:38.576732       1 tokens_controller.go:260] error synchronizing serviceaccount port-forwarding-3739/default: secrets "default-token-mxwxr" is forbidden: unable to create new content in namespace port-forwarding-3739 because it is being terminated
E1217 09:31:38.606219       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1217 09:31:38.644283       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"services-7332", Name:"externalsvc", UID:"17fea153-5636-4fe9-8185-5bfbbc6110d4", APIVersion:"v1", ResourceVersion:"1821", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint services-7332/externalsvc: Operation cannot be fulfilled on endpoints "externalsvc": the object has been modified; please apply your changes to the latest version and try again
E1217 09:31:39.042185       1 tokens_controller.go:260] error synchronizing serviceaccount downward-api-4478/default: secrets "default-token-kdvcr" is forbidden: unable to create new content in namespace downward-api-4478 because it is being terminated
I1217 09:31:39.166957       1 namespace_controller.go:185] Namespace has been deleted tables-4544
E1217 09:31:39.312825       1 tokens_controller.go:260] error synchronizing serviceaccount container-lifecycle-hook-9198/default: secrets "default-token-s7656" is forbidden: unable to create new content in namespace container-lifecycle-hook-9198 because it is being terminated
E1217 09:31:39.608628       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1217 09:31:39.956756       1 tokens_controller.go:260] error synchronizing serviceaccount downward-api-8407/default: secrets "default-token-h47ds" is forbidden: unable to create new content in namespace downward-api-8407 because it is being terminated
I1217 09:31:40.480479       1 resource_quota_controller.go:305] Resource quota has been deleted replication-controller-2758/condition-test
E1217 09:31:40.610961       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1217 09:31:41.102534       1 tokens_controller.go:260] error synchronizing serviceaccount persistent-local-volumes-test-9305/default: secrets "default-token-sbx8g" is forbidden: unable to create new content in namespace persistent-local-volumes-test-9305 because it is being terminated
E1217 09:31:41.220651       1 
tokens_controller.go:260] error synchronizing serviceaccount gc-3026/default: secrets \"default-token-8l59p\" is forbidden: unable to create new content in namespace gc-3026 because it is being terminated\nI1217 09:31:41.334383       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"2873\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-794bd8777b to 1\nI1217 09:31:41.344280       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-794bd8777b\", UID:\"43f0f0c5-ffce-41e1-a126-f67225a96a61\", APIVersion:\"apps/v1\", ResourceVersion:\"2874\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-794bd8777b-thbnj\nI1217 09:31:41.356581       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"2873\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set webserver-79fbcb94c6 to 0\nI1217 09:31:41.396749       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"2873\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set webserver-b5dd7476d to 9\nI1217 09:31:41.465003       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-79fbcb94c6\", UID:\"d4cb9edb-c4c7-4a34-8930-c458bd7642cb\", APIVersion:\"apps/v1\", ResourceVersion:\"2880\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-79fbcb94c6-8t6ln\nI1217 09:31:41.465353       1 event.go:281] 
Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-79fbcb94c6\", UID:\"d4cb9edb-c4c7-4a34-8930-c458bd7642cb\", APIVersion:\"apps/v1\", ResourceVersion:\"2880\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-79fbcb94c6-hnjr4\nI1217 09:31:41.475668       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1554\", Name:\"webserver\", UID:\"ca807157-460e-41b8-810d-510f844e2409\", APIVersion:\"apps/v1\", ResourceVersion:\"2876\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-794bd8777b to 5\nI1217 09:31:41.542845       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-794bd8777b\", UID:\"43f0f0c5-ffce-41e1-a126-f67225a96a61\", APIVersion:\"apps/v1\", ResourceVersion:\"2889\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-794bd8777b-gfmmv\nI1217 09:31:41.547805       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-b5dd7476d\", UID:\"993ac610-95d3-43fe-8906-6dd3362d3275\", APIVersion:\"apps/v1\", ResourceVersion:\"2883\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-b5dd7476d-nhblm\nI1217 09:31:41.548465       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-b5dd7476d\", UID:\"993ac610-95d3-43fe-8906-6dd3362d3275\", APIVersion:\"apps/v1\", ResourceVersion:\"2883\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: webserver-b5dd7476d-9l6hz\nI1217 09:31:41.579602       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-794bd8777b\", UID:\"43f0f0c5-ffce-41e1-a126-f67225a96a61\", APIVersion:\"apps/v1\", ResourceVersion:\"2889\", FieldPath:\"\"}): type: 'Normal' reason: 
'SuccessfulCreate' Created pod: webserver-794bd8777b-7jhrk\nI1217 09:31:41.579979       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-794bd8777b\", UID:\"43f0f0c5-ffce-41e1-a126-f67225a96a61\", APIVersion:\"apps/v1\", ResourceVersion:\"2889\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-794bd8777b-wthd2\nE1217 09:31:41.616830       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1217 09:31:41.626658       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1554\", Name:\"webserver-794bd8777b\", UID:\"43f0f0c5-ffce-41e1-a126-f67225a96a61\", APIVersion:\"apps/v1\", ResourceVersion:\"2889\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-794bd8777b-lblxf\nE1217 09:31:42.619792       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1217 09:31:43.064229       1 resource_quota_controller.go:305] Resource quota has been deleted resourcequota-8791/test-quota\nI1217 09:31:43.461475       1 namespace_controller.go:185] Namespace has been deleted pv-9509\nE1217 09:31:43.623994       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1217 09:31:43.669776       1 namespace_controller.go:185] Namespace has been deleted pv-7797\nI1217 09:31:43.898371       1 event.go:281] Event(v1.ObjectReference{Kind:\"StatefulSet\", Namespace:\"statefulset-3860\", Name:\"ss2\", UID:\"63f44d0d-a59b-4eb8-b4f5-ddaa22813068\", APIVersion:\"apps/v1\", ResourceVersion:\"3011\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' create Pod ss2-0 in StatefulSet 
ss2 successful\nI1217 09:31:44.210473       1 namespace_controller.go:185] Namespace has been deleted downward-api-4478\nI1217 09:31:44.365788       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"kubectl-3089\", Name:\"frontend\", UID:\"ee278408-8d94-43fe-aded-79f83c45b192\", APIVersion:\"apps/v1\", ResourceVersion:\"3027\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set frontend-6c5f89d5d4 to 3\nI1217 09:31:44.375074       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"kubectl-3089\", Name:\"frontend-6c5f89d5d4\", UID:\"eb66a2a1-9305-415b-9a7d-5da07fa31aab\", APIVersion:\"apps/v1\", ResourceVersion:\"3028\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6c5f89d5d4-r8sw7\nI1217 09:31:44.378976       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"kubectl-3089\", Name:\"frontend-6c5f89d5d4\", UID:\"eb66a2a1-9305-415b-9a7d-5da07fa31aab\", APIVersion:\"apps/v1\", ResourceVersion:\"3028\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6c5f89d5d4-pgl78\nI1217 09:31:44.386997       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"kubectl-3089\", Name:\"frontend-6c5f89d5d4\", UID:\"eb66a2a1-9305-415b-9a7d-5da07fa31aab\", APIVersion:\"apps/v1\", ResourceVersion:\"3028\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6c5f89d5d4-mmnqp\nE1217 09:31:44.628698       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1217 09:31:44.651460       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"kubectl-3089\", Name:\"agnhost-master\", UID:\"842a8f31-81d6-484a-86c0-014a2a49fa32\", APIVersion:\"apps/v1\", ResourceVersion:\"3052\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' 
Scaled up replica set agnhost-master-74c46fb7d4 to 1\nI1217 09:31:44.667274       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"kubectl-3089\", Name:\"agnhost-master-74c46fb7d4\", UID:\"9f5fd3a0-166b-4057-b547-c7c4e407bf6b\", APIVersion:\"apps/v1\", ResourceVersion:\"3053\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: agnhost-master-74c46fb7d4-vnxlp\nI1217 09:31:44.943687       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"kubectl-3089\", Name:\"agnhost-slave\", UID:\"0dc546ff-e5ef-4341-84e5-4f83c05f0dcd\", APIVersion:\"apps/v1\", ResourceVersion:\"3069\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set agnhost-slave-774cfc759f to 2\nI1217 09:31:44.958858       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"kubectl-3089\", Name:\"agnhost-slave-774cfc759f\", UID:\"4fa9233d-5647-4018-93b6-080f2fc2eec8\", APIVersion:\"apps/v1\", ResourceVersion:\"3071\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: agnhost-slave-774cfc759f-ctlv8\nI1217 09:31:44.971330       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"kubectl-3089\", Name:\"agnhost-slave-774cfc759f\", UID:\"4fa9233d-5647-4018-93b6-080f2fc2eec8\", APIVersion:\"apps/v1\", ResourceVersion:\"3071\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: agnhost-slave-774cfc759f-49fln\nE1217 09:31:45.630491       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1217 09:31:45.704775       1 namespace_controller.go:185] Namespace has been deleted replication-controller-2758\nE1217 09:31:46.635211       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1217 
09:31:46.996667       1 namespace_controller.go:185] Namespace has been deleted gc-3026\nI1217 09:31:47.163981       1 namespace_controller.go:185] Namespace has been deleted kubectl-2588\nE1217 09:31:47.637443       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1217 09:31:48.143342       1 tokens_controller.go:260] error synchronizing serviceaccount resourcequota-8791/default: secrets \"default-token-r5rgq\" is forbidden: unable to create new content in namespace resourcequota-8791 because it is being terminated\nE1217 09:31:48.641231       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE1217 09:31:48.754782       1 tokens_controller.go:260] error synchronizing serviceaccount downward-api-458/default: secrets \"default-token-fstmd\" is forbidden: unable to create new content in namespace downward-api-458 because it is being terminated\nE1217 09:31:49.647195       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1217 09:31:50.521117       1 event.go:281] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-1665\", Name:\"test-new-deployment\", UID:\"76dcbdaf-b8a9-4552-8dee-1e34fbb0dda0\", APIVersion:\"apps/v1\", ResourceVersion:\"3283\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-new-deployment-595b5b9587 to 1\nI1217 09:31:50.549711       1 event.go:281] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-1665\", Name:\"test-new-deployment-595b5b9587\", UID:\"a64b8c32-88b7-4ae2-bebe-851635e6bb15\", APIVersion:\"apps/v1\", ResourceVersion:\"3284\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: 
test-new-deployment-595b5b9587-kdg8n\nE1217 09:31:50.649381       1 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI1217 09:31:50.760318       1 namespace_controller.go:185] Namespace has been deleted kubectl-4807\n==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-kind-control-plane ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-cwrhc ====\nW1217 09:30:03.705033       1 server_others.go:330] Unknown proxy mode \"\", assuming iptables proxy\nI1217 09:30:03.722911       1 node.go:135] Successfully retrieved node IP: 172.17.0.4\nI1217 09:30:03.722975       1 server_others.go:145] Using iptables Proxier.\nI1217 09:30:03.723465       1 server.go:574] Version: v1.18.0-alpha.0.1812+5ad586f84e16e5\nI1217 09:30:03.725681       1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI1217 09:30:03.726303       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI1217 09:30:03.726382       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI1217 09:30:03.727236       1 config.go:131] Starting endpoints config controller\nI1217 09:30:03.728164       1 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI1217 09:30:03.728120       1 config.go:313] Starting service config controller\nI1217 09:30:03.728565       1 shared_informer.go:197] Waiting for caches to sync for service config\nI1217 09:30:03.828593       1 shared_informer.go:204] Caches are synced for endpoints config \nI1217 09:30:03.828926       1 shared_informer.go:204] Caches are synced for service config \n==== END logs for container kube-proxy of pod kube-system/kube-proxy-cwrhc ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-f8mcv ====\nW1217 09:29:40.113334       1 server_others.go:330] 
Unknown proxy mode \"\", assuming iptables proxy\nI1217 09:29:40.125418       1 node.go:135] Successfully retrieved node IP: 172.17.0.3\nI1217 09:29:40.125466       1 server_others.go:145] Using iptables Proxier.\nI1217 09:29:40.126527       1 server.go:574] Version: v1.18.0-alpha.0.1812+5ad586f84e16e5\nI1217 09:29:40.127429       1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI1217 09:29:40.127607       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI1217 09:29:40.127712       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI1217 09:29:40.128997       1 config.go:313] Starting service config controller\nI1217 09:29:40.129241       1 shared_informer.go:197] Waiting for caches to sync for service config\nI1217 09:29:40.130428       1 config.go:131] Starting endpoints config controller\nI1217 09:29:40.133684       1 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI1217 09:29:40.232160       1 shared_informer.go:204] Caches are synced for service config \nI1217 09:29:40.234768       1 shared_informer.go:204] Caches are synced for endpoints config \n==== END logs for container kube-proxy of pod kube-system/kube-proxy-f8mcv ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-h7xw6 ====\nW1217 09:30:03.090936       1 server_others.go:330] Unknown proxy mode \"\", assuming iptables proxy\nI1217 09:30:03.105019       1 node.go:135] Successfully retrieved node IP: 172.17.0.2\nI1217 09:30:03.105055       1 server_others.go:145] Using iptables Proxier.\nI1217 09:30:03.105334       1 server.go:574] Version: v1.18.0-alpha.0.1812+5ad586f84e16e5\nI1217 09:30:03.108013       1 conntrack.go:52] Setting nf_conntrack_max to 262144\nI1217 09:30:03.108530       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI1217 09:30:03.108831       1 conntrack.go:100] Set sysctl 
'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI1217 09:30:03.109329       1 config.go:313] Starting service config controller\nI1217 09:30:03.109361       1 shared_informer.go:197] Waiting for caches to sync for service config\nI1217 09:30:03.109649       1 config.go:131] Starting endpoints config controller\nI1217 09:30:03.111913       1 shared_informer.go:197] Waiting for caches to sync for endpoints config\nI1217 09:30:03.209563       1 shared_informer.go:204] Caches are synced for service config \nI1217 09:30:03.214263       1 shared_informer.go:204] Caches are synced for endpoints config \n==== END logs for container kube-proxy of pod kube-system/kube-proxy-h7xw6 ====\n==== START logs for container kube-scheduler of pod kube-system/kube-scheduler-kind-control-plane ====\nI1217 09:29:06.961724       1 serving.go:312] Generated self-signed cert in-memory\nW1217 09:29:08.165314       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: \"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\" due to: configmap \"extension-apiserver-authentication\" not found\nW1217 09:29:08.165487       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: \"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\" due to: configmap \"extension-apiserver-authentication\" not found\nW1217 09:29:11.572131       1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system.  
Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nW1217 09:29:11.572453       1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"\nW1217 09:29:11.572625       1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.\nW1217 09:29:11.572769       1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false\nW1217 09:29:11.622285       1 authorization.go:47] Authorization is disabled\nW1217 09:29:11.622861       1 authentication.go:92] Authentication is disabled\nI1217 09:29:11.623196       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251\nI1217 09:29:11.625590       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259\nI1217 09:29:11.625969       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI1217 09:29:11.625984       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI1217 09:29:11.626036       1 tlsconfig.go:219] Starting DynamicServingCertificateController\nE1217 09:29:11.628254       1 reflector.go:156] cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope\nE1217 09:29:11.630522       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group 
\"storage.k8s.io\" at the cluster scope\nE1217 09:29:11.630947       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope\nE1217 09:29:11.637380       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope\nE1217 09:29:11.637523       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope\nE1217 09:29:11.637594       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope\nE1217 09:29:11.637852       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope\nE1217 09:29:11.638058       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope\nE1217 09:29:11.638228       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope\nE1217 
09:29:11.638975       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"\nE1217 09:29:11.639502       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope\nE1217 09:29:11.639733       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope\nE1217 09:29:12.631812       1 reflector.go:156] cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope\nE1217 09:29:12.633466       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope\nE1217 09:29:12.641549       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope\nE1217 09:29:12.643022       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope\nE1217 09:29:12.644493       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to 
list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope\nE1217 09:29:12.644889       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope\nE1217 09:29:12.649504       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope\nE1217 09:29:12.649580       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope\nE1217 09:29:12.649692       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope\nE1217 09:29:12.650916       1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"\nE1217 09:29:12.651849       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope\nE1217 09:29:12.653560       1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to 
list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope\nI1217 09:29:13.726576       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file \nI1217 09:29:13.727338       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...\nI1217 09:29:13.747684       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler\nE1217 09:29:54.273720       1 factory.go:488] pod: kube-system/coredns-6955765f44-rdtng is already present in the active queue\nE1217 09:29:54.295540       1 factory.go:488] pod: local-path-storage/local-path-provisioner-7745554f7f-jktcl is already present in the active queue\nE1217 09:29:55.817688       1 factory.go:488] pod: local-path-storage/local-path-provisioner-7745554f7f-jktcl is already present in the active queue\nE1217 09:31:35.459174       1 factory.go:488] pod: persistent-local-volumes-test-9305/security-context-989f5b22-41a3-45ca-b2ae-8cf269c97f17 is already present in the active queue\nE1217 09:31:41.879737       1 event.go:263] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"security-context-989f5b22-41a3-45ca-b2ae-8cf269c97f17.15e11eb3bd079c12\", GenerateName:\"\", Namespace:\"persistent-local-volumes-test-9305\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"persistent-local-volumes-test-9305\", 
Name:\"security-context-989f5b22-41a3-45ca-b2ae-8cf269c97f17\", UID:\"7e18efb1-146b-45f3-b5b1-e95022cdd67e\", APIVersion:\"v1\", ResourceVersion:\"2937\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: persistent-local-volumes-test-9305/security-context-989f5b22-41a3-45ca-b2ae-8cf269c97f17\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf7645ff7432fa12, ext:155605487049, loc:(*time.Location)(0x2c10240)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf7645ff7432fa12, ext:155605487049, loc:(*time.Location)(0x2c10240)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"security-context-989f5b22-41a3-45ca-b2ae-8cf269c97f17.15e11eb3bd079c12\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-9305 because it is being terminated' (will not retry!)\n==== END logs for container kube-scheduler of pod kube-system/kube-scheduler-kind-control-plane ====\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/default/events\",\n        \"resourceVersion\": \"3317\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane.15e11e8e4b653c6d\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15e11e8e4b653c6d\",\n                \"uid\": \"9a2586ad-0674-41b7-a95b-daf18dfa51ea\",\n                \"resourceVersion\": \"173\",\n                \"creationTimestamp\": \"2019-12-17T09:29:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-control-plane\",\n             
   \"uid\": \"kind-control-plane\"\n            },\n            \"reason\": \"NodeHasSufficientMemory\",\n            \"message\": \"Node kind-control-plane status is now: NodeHasSufficientMemory\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:01Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:01Z\",\n            \"count\": 5,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane.15e11e8e4b659041\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15e11e8e4b659041\",\n                \"uid\": \"70a36005-ff72-4741-83a8-cb0e2ad0153b\",\n                \"resourceVersion\": \"163\",\n                \"creationTimestamp\": \"2019-12-17T09:29:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-control-plane\",\n                \"uid\": \"kind-control-plane\"\n            },\n            \"reason\": \"NodeHasNoDiskPressure\",\n            \"message\": \"Node kind-control-plane status is now: NodeHasNoDiskPressure\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:01Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:01Z\",\n            \"count\": 4,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane.15e11e8e4b65a9c0\",\n           
     \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15e11e8e4b65a9c0\",\n                \"uid\": \"72640be7-5563-4648-b288-d2e4880c04f8\",\n                \"resourceVersion\": \"166\",\n                \"creationTimestamp\": \"2019-12-17T09:29:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-control-plane\",\n                \"uid\": \"kind-control-plane\"\n            },\n            \"reason\": \"NodeHasSufficientPID\",\n            \"message\": \"Node kind-control-plane status is now: NodeHasSufficientPID\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:01Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:01Z\",\n            \"count\": 5,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane.15e11e9190314d75\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15e11e9190314d75\",\n                \"uid\": \"fb4cecec-637a-429a-a3cf-708f39039251\",\n                \"resourceVersion\": \"185\",\n                \"creationTimestamp\": \"2019-12-17T09:29:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-control-plane\",\n                \"uid\": \"kind-control-plane\"\n            },\n            \"reason\": \"Starting\",\n            \"message\": \"Starting kubelet.\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            
\"firstTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane.15e11e91935577d8\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15e11e91935577d8\",\n                \"uid\": \"8055b1e6-1758-4f26-83db-d9c3dd38a0be\",\n                \"resourceVersion\": \"191\",\n                \"creationTimestamp\": \"2019-12-17T09:29:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-control-plane\",\n                \"uid\": \"kind-control-plane\"\n            },\n            \"reason\": \"CheckLimitsForResolvConf\",\n            \"message\": \"Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane.15e11e919753f543\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15e11e919753f543\",\n                \"uid\": \"06b8c1c5-9be0-497c-8ee8-6229989f72f3\",\n                \"resourceVersion\": \"192\",\n                \"creationTimestamp\": 
\"2019-12-17T09:29:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-control-plane\",\n                \"uid\": \"kind-control-plane\"\n            },\n            \"reason\": \"NodeHasSufficientMemory\",\n            \"message\": \"Node kind-control-plane status is now: NodeHasSufficientMemory\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane.15e11e9197541c6c\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15e11e9197541c6c\",\n                \"uid\": \"6f1c2252-d76c-4ef5-971f-b7b6c9443ce1\",\n                \"resourceVersion\": \"193\",\n                \"creationTimestamp\": \"2019-12-17T09:29:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-control-plane\",\n                \"uid\": \"kind-control-plane\"\n            },\n            \"reason\": \"NodeHasNoDiskPressure\",\n            \"message\": \"Node kind-control-plane status is now: NodeHasNoDiskPressure\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": 
\"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane.15e11e919754309b\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15e11e919754309b\",\n                \"uid\": \"ed162952-bcb6-4e43-995c-94b98c1d4ab7\",\n                \"resourceVersion\": \"194\",\n                \"creationTimestamp\": \"2019-12-17T09:29:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-control-plane\",\n                \"uid\": \"kind-control-plane\"\n            },\n            \"reason\": \"NodeHasSufficientPID\",\n            \"message\": \"Node kind-control-plane status is now: NodeHasSufficientPID\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane.15e11e919f644220\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15e11e919f644220\",\n                \"uid\": \"7597eb8a-2d38-49c0-bfcb-4e3a75f21856\",\n                \"resourceVersion\": \"196\",\n                \"creationTimestamp\": \"2019-12-17T09:29:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-control-plane\",\n                \"uid\": \"kind-control-plane\"\n            },\n            \"reason\": \"NodeAllocatableEnforced\",\n           
 \"message\": \"Updated Node Allocatable limit across pods\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane.15e11e95cdd5e1bf\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15e11e95cdd5e1bf\",\n                \"uid\": \"3a024f04-fbdd-40d4-9a81-89673f65d804\",\n                \"resourceVersion\": \"374\",\n                \"creationTimestamp\": \"2019-12-17T09:29:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-control-plane\",\n                \"uid\": \"02aad5bd-a337-4316-8440-c7d9935250c5\"\n            },\n            \"reason\": \"RegisteredNode\",\n            \"message\": \"Node kind-control-plane event: Registered Node kind-control-plane in Controller\",\n            \"source\": {\n                \"component\": \"node-controller\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane.15e11e97644d8096\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15e11e97644d8096\",\n          
      \"uid\": \"136958f2-b7d6-411d-9aed-ccb2c995f0e6\",\n                \"resourceVersion\": \"449\",\n                \"creationTimestamp\": \"2019-12-17T09:29:40Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-control-plane\",\n                \"uid\": \"kind-control-plane\"\n            },\n            \"reason\": \"Starting\",\n            \"message\": \"Starting kube-proxy.\",\n            \"source\": {\n                \"component\": \"kube-proxy\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:40Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:40Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane.15e11e9f93cb1125\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-control-plane.15e11e9f93cb1125\",\n                \"uid\": \"6caf0df1-e327-4c2e-8e0d-27a825a56eed\",\n                \"resourceVersion\": \"618\",\n                \"creationTimestamp\": \"2019-12-17T09:30:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-control-plane\",\n                \"uid\": \"kind-control-plane\"\n            },\n            \"reason\": \"NodeReady\",\n            \"message\": \"Node kind-control-plane status is now: NodeReady\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:30:15Z\",\n            \"lastTimestamp\": \"2019-12-17T09:30:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n     
       \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker.15e11e974186db8f\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-worker.15e11e974186db8f\",\n                \"uid\": \"0e83967a-9735-4cf2-b632-a45a01273511\",\n                \"resourceVersion\": \"482\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-worker\",\n                \"uid\": \"kind-worker\"\n            },\n            \"reason\": \"NodeHasSufficientMemory\",\n            \"message\": \"Node kind-worker status is now: NodeHasSufficientMemory\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:39Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"count\": 8,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker.15e11e9ba058fe4f\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-worker.15e11e9ba058fe4f\",\n                \"uid\": \"c07ac7f9-b5ba-401c-a9d5-0577dd3ed256\",\n                \"resourceVersion\": \"550\",\n                \"creationTimestamp\": \"2019-12-17T09:29:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-worker\",\n                \"uid\": \"d0d178e3-45e1-428d-8224-18ccb519c892\"\n            },\n            \"reason\": 
\"RegisteredNode\",\n            \"message\": \"Node kind-worker event: Registered Node kind-worker in Controller\",\n            \"source\": {\n                \"component\": \"node-controller\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:58Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker.15e11e9cbe174a0c\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-worker.15e11e9cbe174a0c\",\n                \"uid\": \"c73c903a-5f21-46be-b102-8a1fe7b6e76d\",\n                \"resourceVersion\": \"575\",\n                \"creationTimestamp\": \"2019-12-17T09:30:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-worker\",\n                \"uid\": \"kind-worker\"\n            },\n            \"reason\": \"Starting\",\n            \"message\": \"Starting kube-proxy.\",\n            \"source\": {\n                \"component\": \"kube-proxy\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:30:03Z\",\n            \"lastTimestamp\": \"2019-12-17T09:30:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker2.15e11e974a0a5b87\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-worker2.15e11e974a0a5b87\",\n                \"uid\": \"a5971149-5cbe-49b1-a293-cabe73a4e68f\",\n              
  \"resourceVersion\": \"510\",\n                \"creationTimestamp\": \"2019-12-17T09:29:54Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-worker2\",\n                \"uid\": \"kind-worker2\"\n            },\n            \"reason\": \"NodeHasSufficientMemory\",\n            \"message\": \"Node kind-worker2 status is now: NodeHasSufficientMemory\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:39Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:54Z\",\n            \"count\": 8,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker2.15e11e9ba0594a2c\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-worker2.15e11e9ba0594a2c\",\n                \"uid\": \"6ff69d1e-d562-49d0-9932-c0fcc4288e93\",\n                \"resourceVersion\": \"551\",\n                \"creationTimestamp\": \"2019-12-17T09:29:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-worker2\",\n                \"uid\": \"8d9fe470-04b7-40aa-844e-6b31492ca35f\"\n            },\n            \"reason\": \"RegisteredNode\",\n            \"message\": \"Node kind-worker2 event: Registered Node kind-worker2 in Controller\",\n            \"source\": {\n                \"component\": \"node-controller\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:29:58Z\",\n            \"lastTimestamp\": \"2019-12-17T09:29:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            
\"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker2.15e11e9ce2e1a813\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/events/kind-worker2.15e11e9ce2e1a813\",\n                \"uid\": \"779c5ab8-9b7f-4e67-9ca3-eb62be9d896b\",\n                \"resourceVersion\": \"577\",\n                \"creationTimestamp\": \"2019-12-17T09:30:03Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Node\",\n                \"name\": \"kind-worker2\",\n                \"uid\": \"kind-worker2\"\n            },\n            \"reason\": \"Starting\",\n            \"message\": \"Starting kube-proxy.\",\n            \"source\": {\n                \"component\": \"kube-proxy\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-12-17T09:30:03Z\",\n            \"lastTimestamp\": \"2019-12-17T09:30:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/default/replicationcontrollers\",\n        \"resourceVersion\": \"3317\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/default/services\",\n        \"resourceVersion\": \"3317\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kubernetes\",\n                \"namespace\": \"default\",\n                \"selfLink\": \"/api/v1/namespaces/default/services/kubernetes\",\n                \"uid\": \"6e8bce63-14a5-4c85-8306-4a7a21220f18\",\n              
  \"resourceVersion\": \"148\",\n                \"creationTimestamp\": \"2019-12-17T09:29:13Z\",\n                \"labels\": {\n                    \"component\": \"apiserver\",\n                    \"provider\": \"kubernetes\"\n                }\n            },\n            \"spec\": {\n                \"ports\": [\n                    {\n                        \"name\": \"https\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 443,\n                        \"targetPort\": 6443\n                    }\n                ],\n                \"clusterIP\": \"10.96.0.1\",\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\"\n            },\n            \"status\": {\n                \"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"selfLink\": \"/apis/apps/v1/namespaces/default/daemonsets\",\n        \"resourceVersion\": \"3317\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"selfLink\": \"/apis/apps/v1/namespaces/default/deployments\",\n        \"resourceVersion\": \"3317\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"selfLink\": \"/apis/apps/v1/namespaces/default/replicasets\",\n        \"resourceVersion\": \"3317\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/default/pods\",\n        \"resourceVersion\": \"3317\"\n    },\n    \"items\": []\n}\nCluster info dumped to standard output\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Dec 17 09:31:51.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2712" for this suite.

•
... skipping 41 lines ...
• [SLOW TEST:20.028 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}

SSSSSSSS
------------------------------
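For context on the AdmissionWebhook result above: the "patching/updating a mutating webhook" conformance test registers, then updates, a configuration of roughly this shape. This is a sketch only; the webhook, service, and namespace names are illustrative, not taken from this log.

```yaml
# Hypothetical MutatingWebhookConfiguration of the kind the e2e test
# registers and later patches; all names here are made up.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook
webhooks:
  - name: example.webhook.k8s.io
    clientConfig:
      service:
        name: webhook-service
        namespace: default
        path: /mutate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore
```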
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 48 lines ...
      Driver local doesn't support ntfs -- skipping

      test/e2e/storage/testsuites/base.go:153
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds","total":-1,"completed":2,"skipped":23,"failed":0}
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:31:51.109: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:31:55.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9316" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}

SSSSSSSSSSSSSSSSS
------------------------------
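The Secrets-in-env-vars conformance test above exercises the `env.valueFrom.secretKeyRef` field; a minimal pod of the kind it creates looks roughly like this. The pod, secret, and key names are illustrative (the busybox image tag matches the one used elsewhere in this run).

```yaml
# Minimal pod consuming a Secret through an environment variable,
# as the Secrets conformance test does; names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  restartPolicy: Never
  containers:
    - name: secret-env-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "env"]
      env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: secret-test
              key: data-1
```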
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 115 lines ...
test/e2e/framework/framework.go:680
  [k8s.io] [sig-node] Clean up pods on node
  test/e2e/framework/framework.go:680
    kubelet should be able to delete 10 pods per node in 1m0s.
    test/e2e/node/kubelet.go:339
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:31:56.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7557" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":2,"skipped":14,"failed":0}

SSSS
------------------------------
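The "create a quota with scopes" test above drives `kubectl create quota` with the `--scopes` flag, which produces a scoped ResourceQuota of roughly this form. The object name and limits below are illustrative, not taken from the log.

```yaml
# A scoped ResourceQuota like the one `kubectl create quota --scopes=...`
# produces; name and hard limits are illustrative.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: scopes-quota
spec:
  hard:
    pods: "2"
  scopes:
    - BestEffort
    - NotTerminating
```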
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 158 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":19,"failed":0}

SSSSSSSS
------------------------------
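The subPath result above covers a volumeMount using `subPath` together with `readOnly`. A minimal sketch of that mount shape follows; the e2e test itself uses a pre-provisioned local PV, so the hostPath volume and all paths here are simplifying assumptions.

```yaml
# Sketch of a readOnly subPath volumeMount; the volume type and
# paths are illustrative, not the test's actual local-PV setup.
apiVersion: v1
kind: Pod
metadata:
  name: subpath-readonly-pod
spec:
  containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /test/data && sleep 3600"]
      volumeMounts:
        - name: test-volume
          mountPath: /test
          subPath: subdir
          readOnly: true
  volumes:
    - name: test-volume
      hostPath:
        path: /tmp/subpath-test
```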
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 31 lines ...
  test/e2e/common/runtime.go:38
    when starting a container that exits
    test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:31:58.126: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 86 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:31:58.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9949" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":2,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Volume Placement
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 106 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:524
    should support exec through kubectl proxy
    test/e2e/kubectl/kubectl.go:618
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":3,"skipped":17,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:31:59.158: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 230 lines ...
• [SLOW TEST:62.345 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
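The Services result above converts a Service from type ClusterIP to type ExternalName; the target shape after conversion is roughly the following (the service name and external hostname are illustrative):

```yaml
# Target state of the ClusterIP-to-ExternalName conversion test;
# name and externalName host are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
spec:
  type: ExternalName
  externalName: foo.default.svc.cluster.local
```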
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:00.503: INFO: Driver local doesn't support ext3 -- skipping
... skipping 154 lines ...
Dec 17 09:30:58.323: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
Dec 17 09:30:59.237: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:685
STEP: creating the pod
Dec 17 09:30:59.240: INFO: PodSpec: initContainers in spec.initContainers
Dec 17 09:32:04.133: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3bfe7ac6-9b80-4f9a-8275-96259665338f", GenerateName:"", Namespace:"init-container-6513", SelfLink:"/api/v1/namespaces/init-container-6513/pods/pod-init-3bfe7ac6-9b80-4f9a-8275-96259665338f", UID:"7028f972-2f60-4e2b-b220-162afc5ddd1e", ResourceVersion:"4176", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712171859, loc:(*time.Location)(0x7d4d120)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"240749113"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pwx2b", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0025382c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pwx2b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pwx2b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pwx2b", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002191eb0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0022095c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002191f40)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002191f60)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002191f68), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002191f6c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712171859, loc:(*time.Location)(0x7d4d120)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712171859, loc:(*time.Location)(0x7d4d120)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712171859, loc:(*time.Location)(0x7d4d120)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712171859, loc:(*time.Location)(0x7d4d120)}}, Reason:"", Message:""}}, 
Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.2", PodIP:"10.244.1.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.4"}}, StartTime:(*v1.Time)(0xc0021f1040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00159aaf0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00159ab60)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://f25b3edd7f39674b856f79b71d72fa49a1a993db5f03ab201aae38d1c4cb73b3", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0021f1080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0021f1060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002191fef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Dec 17 09:32:04.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6513" for this suite.


• [SLOW TEST:65.827 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:680
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":4,"skipped":19,"failed":0}
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:31:50.386: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 28 lines ...
• [SLOW TEST:14.429 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  deployment reaping should cascade to its replica sets and pods
  test/e2e/apps/deployment.go:78
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":5,"skipped":19,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":4,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:32:00.640: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 28 lines ...
• [SLOW TEST:5.784 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":4,"skipped":4,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 43 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 67 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:32:07.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9588" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":4,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 67 lines ...
• [SLOW TEST:60.320 seconds]
[sig-api-machinery] Watchers
test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:09.302: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:175
Dec 17 09:32:09.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 85 lines ...
• [SLOW TEST:11.722 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]
  test/e2e/apimachinery/resource_quota.go:507
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]","total":-1,"completed":3,"skipped":32,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:10.204: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 93 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:10.514: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175
Dec 17 09:32:10.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 202 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:32:11.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-4193" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":3,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 62 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:12.050: INFO: Only supported for providers [aws] (not skeleton)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:175
Dec 17 09:32:12.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 191 lines ...
• [SLOW TEST:75.053 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  test/e2e/apps/deployment.go:112
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":1,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:8.226 seconds]
[sig-storage] Projected secret
test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 98 lines ...
test/e2e/framework/framework.go:680
  When creating a container with runAsNonRoot
  test/e2e/common/security_context.go:97
    should run with an explicit non-root user ID [LinuxOnly]
    test/e2e/common/security_context.go:122
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":6,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:15.078: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:175
Dec 17 09:32:15.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 69 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":3,"skipped":3,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 8 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:32:16.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-9014" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass","total":-1,"completed":4,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 159 lines ...
Dec 17 09:31:44.947: INFO: stderr: ""
Dec 17 09:31:44.947: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Dec 17 09:31:44.947: INFO: Waiting for all frontend pods to be Running.
Dec 17 09:32:04.999: INFO: Waiting for frontend to serve content.
Dec 17 09:32:05.016: INFO: Trying to add a new entry to the guestbook.
Dec 17 09:32:06.044: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Dec 17 09:32:11.078: INFO: Verifying that added entry can be retrieved.
Dec 17 09:32:11.093: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
Dec 17 09:32:16.161: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35987 --kubeconfig=/root/.kube/kind-test-config delete --grace-period=0 --force -f - --namespace=kubectl-3089'
Dec 17 09:32:16.380: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 09:32:16.380: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 09:32:16.380: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35987 --kubeconfig=/root/.kube/kind-test-config delete --grace-period=0 --force -f - --namespace=kubectl-3089'
... skipping 26 lines ...
test/e2e/kubectl/framework.go:23
  Guestbook application
  test/e2e/kubectl/kubectl.go:387
    should create and stop a working application  [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:17.633: INFO: Driver local doesn't support ext4 -- skipping
... skipping 70 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  test/e2e/kubectl/kubectl.go:1688
    should support rolling-update to same image  [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":-1,"completed":2,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 79 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:94
      should store data
      test/e2e/storage/testsuites/volumes.go:150
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:11.262 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":5,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:28.910: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 66 lines ...
• [SLOW TEST:16.106 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":6,"skipped":27,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:30.898: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 37 lines ...
      Driver local doesn't support ntfs -- skipping

      test/e2e/storage/testsuites/base.go:153
------------------------------
SS
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:31:22.290: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 74 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":8,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:31.421: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 37 lines ...
      Driver local doesn't support ext3 -- skipping

      test/e2e/storage/testsuites/base.go:153
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":22,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:31:57.236: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 63 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:203
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:226
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":22,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:35.436: INFO: Only supported for providers [gce gke] (not skeleton)
... skipping 102 lines ...
• [SLOW TEST:19.382 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":5,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 65 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:680
    should adopt matching orphans and release non-matching pods
    test/e2e/apps/statefulset.go:139
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":2,"skipped":9,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:32:04.154: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 71 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:242
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:243
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":14,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
  test/e2e/common/runtime.go:38
    when running a container with a new image
    test/e2e/common/runtime.go:263
      should be able to pull image [NodeConformance]
      test/e2e/common/runtime.go:374
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":5,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:41.591: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:175
Dec 17 09:32:41.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 31 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:32:41.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3218" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:41.846: INFO: Driver local doesn't support ntfs -- skipping
... skipping 88 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":9,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 30 lines ...
  test/e2e/kubectl/portforward.go:464
    that expects NO client request
    test/e2e/kubectl/portforward.go:474
      should support a client that connects, sends DATA, and disconnects
      test/e2e/kubectl/portforward.go:475
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":2,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:42.750: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 86 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":45,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:26.359 seconds]
[sig-storage] PVC Protection
test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  test/e2e/storage/pvc_protection.go:118
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":3,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
test/e2e/framework/framework.go:680
  When creating a pod with privileged
  test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] AppArmor
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 96 lines ...
• [SLOW TEST:20.250 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  test/e2e/apps/deployment.go:115
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":7,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 39 lines ...
• [SLOW TEST:10.177 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:8.229 seconds]
[sig-storage] Secrets
test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":58,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:53.948: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 74 lines ...
• [SLOW TEST:14.237 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 34 lines ...
• [SLOW TEST:11.782 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":4,"skipped":34,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:55.877: INFO: Driver local doesn't support ext4 -- skipping
... skipping 133 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":14,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:32:56.893: INFO: Driver local doesn't support ext3 -- skipping
... skipping 122 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:18.221 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":5,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:01.326: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 82 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1267
STEP: Creating statefulset with conflicting port in namespace statefulset-1267
STEP: Waiting until pod test-pod will start running in namespace statefulset-1267
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1267
Dec 17 09:32:25.239: INFO: Observed stateful pod in namespace: statefulset-1267, name: ss-0, uid: f62191b4-5e3a-4e5f-ac65-c89414a05749, status phase: Pending. Waiting for statefulset controller to delete.
Dec 17 09:32:25.623: INFO: Observed stateful pod in namespace: statefulset-1267, name: ss-0, uid: f62191b4-5e3a-4e5f-ac65-c89414a05749, status phase: Failed. Waiting for statefulset controller to delete.
Dec 17 09:32:25.632: INFO: Observed stateful pod in namespace: statefulset-1267, name: ss-0, uid: f62191b4-5e3a-4e5f-ac65-c89414a05749, status phase: Failed. Waiting for statefulset controller to delete.
Dec 17 09:32:25.638: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1267
STEP: Removing pod with conflicting port in namespace statefulset-1267
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1267 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:90
Dec 17 09:32:41.754: INFO: Deleting all statefulset in ns statefulset-1267
... skipping 11 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:680
    Should recreate evicted statefulset [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":4,"skipped":45,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 77 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [sig-scheduling] Multi-AZ Cluster Volumes [sig-storage]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 113 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    test/e2e/storage/testsuites/base.go:94
      should store data
      test/e2e/storage/testsuites/volumes.go:150
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":3,"skipped":35,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 111 lines ...
• [SLOW TEST:12.248 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_downwardapi.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":75,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:06.224: INFO: Only supported for providers [azure] (not skeleton)
... skipping 43 lines ...
  test/e2e/common/runtime.go:38
    when running a container with a new image
    test/e2e/common/runtime.go:263
      should be able to pull from private registry with secret [NodeConformance]
      test/e2e/common/runtime.go:385
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":8,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:132.030 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent
  test/e2e/apps/cronjob.go:139
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent","total":-1,"completed":2,"skipped":20,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:11.080: INFO: Driver vsphere doesn't support ext3 -- skipping
... skipping 125 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:524
    should return command exit codes
    test/e2e/kubectl/kubectl.go:644
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes","total":-1,"completed":3,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl alpha client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 8 lines ...
  test/e2e/kubectl/kubectl.go:236
Dec 17 09:33:15.936: INFO: Could not find batch/v2alpha1, Resource=cronjobs resource, skipping test: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"Status", APIVersion:"v1"}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"the server could not find the requested resource", Reason:"NotFound", Details:(*v1.StatusDetails)(0xc001fd75c0), Code:404}}
[AfterEach] Kubectl run CronJob
  test/e2e/kubectl/kubectl.go:232
Dec 17 09:33:15.937: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35987 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-5930'
Dec 17 09:33:16.090: INFO: rc: 1
Dec 17 09:33:16.091: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35987 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-5930:\nCommand stdout:\n\nstderr:\nError from server (NotFound): cronjobs.batch \"e2e-test-echo-cronjob-alpha\" not found\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35987 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-5930:
    Command stdout:
    
    stderr:
    Error from server (NotFound): cronjobs.batch "e2e-test-echo-cronjob-alpha" not found
    
    error:
    exit status 1
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.KubectlBuilder.ExecOrDie(0xc001411760, 0x0, 0xc000676040, 0xc, 0x4, 0xc0018e3920)
	test/e2e/framework/util.go:701 +0xbc
... skipping 95 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:33:16.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8024" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":4,"skipped":58,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 107 lines ...
test/e2e/kubectl/framework.go:23
  Update Demo
  test/e2e/kubectl/kubectl.go:329
    should do a rolling update of a replication controller  [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":-1,"completed":6,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 38 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 17 09:33:17.856: INFO: Successfully updated pod "pod-update-activedeadlineseconds-47078666-7ee6-4c6c-98ca-363f3926a20d"
Dec 17 09:33:17.857: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-47078666-7ee6-4c6c-98ca-363f3926a20d" in namespace "pods-9583" to be "terminated due to deadline exceeded"
Dec 17 09:33:17.868: INFO: Pod "pod-update-activedeadlineseconds-47078666-7ee6-4c6c-98ca-363f3926a20d": Phase="Running", Reason="", readiness=true. Elapsed: 11.487896ms
Dec 17 09:33:19.891: INFO: Pod "pod-update-activedeadlineseconds-47078666-7ee6-4c6c-98ca-363f3926a20d": Phase="Running", Reason="", readiness=true. Elapsed: 2.034799864s
Dec 17 09:33:21.898: INFO: Pod "pod-update-activedeadlineseconds-47078666-7ee6-4c6c-98ca-363f3926a20d": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.041625697s
Dec 17 09:33:21.898: INFO: Pod "pod-update-activedeadlineseconds-47078666-7ee6-4c6c-98ca-363f3926a20d" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:175
Dec 17 09:33:21.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9583" for this suite.


• [SLOW TEST:18.805 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:680
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:21.918: INFO: Only supported for providers [gce gke] (not skeleton)
... skipping 40 lines ...
Dec 17 09:33:13.306: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:685
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Dec 17 09:33:25.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2828" for this suite.
STEP: Destroying namespace "webhook-2828-markers" for this suite.
... skipping 4 lines ...
• [SLOW TEST:24.239 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":6,"skipped":60,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:25.598: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 39 lines ...
      Driver local doesn't support ntfs -- skipping

      test/e2e/storage/testsuites/base.go:153
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:33:02.990: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 46 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    test/e2e/storage/testsuites/base.go:94
      should store data
      test/e2e/storage/testsuites/volumes.go:150
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":5,"skipped":17,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 71 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:242
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:243
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 9 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:33:26.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-169" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":4,"skipped":22,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 64 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:524
    should contain last line of the log
    test/e2e/kubectl/kubectl.go:736
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":4,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 32 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  test/e2e/kubectl/kubectl.go:1053
    should create/apply a valid CR for CRD with validation schema
    test/e2e/kubectl/kubectl.go:1072
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":7,"skipped":52,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:28.779: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 165 lines ...
  test/e2e/storage/csi_volumes.go:55
    [Testpattern: inline ephemeral CSI volume] ephemeral
    test/e2e/storage/testsuites/base.go:94
      should support multiple inline ephemeral volumes
      test/e2e/storage/testsuites/ephemeral.go:176
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":5,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:12.484 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  evictions: too few pods, absolute => should not allow an eviction
  test/e2e/apps/disruption.go:149
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":9,"skipped":46,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Aggregator
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 80 lines ...
Dec 17 09:33:15.592: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:15.597: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:15.618: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:15.642: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:15.648: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:15.655: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:15.669: INFO: Lookups using dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7598.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7598.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local jessie_udp@dns-test-service-2.dns-7598.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7598.svc.cluster.local]

Dec 17 09:33:20.678: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:20.684: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:20.688: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:20.692: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:20.707: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:20.712: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:20.718: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:20.721: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:20.730: INFO: Lookups using dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7598.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7598.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local jessie_udp@dns-test-service-2.dns-7598.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7598.svc.cluster.local]

Dec 17 09:33:25.683: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:25.724: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:25.730: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:25.750: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:25.784: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:25.789: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:25.807: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:25.818: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:25.869: INFO: Lookups using dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7598.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7598.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local jessie_udp@dns-test-service-2.dns-7598.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7598.svc.cluster.local]

Dec 17 09:33:30.677: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:30.681: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:30.686: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:30.695: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:30.720: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:30.730: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:30.741: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:30.745: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7598.svc.cluster.local from pod dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210: the server could not find the requested resource (get pods dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210)
Dec 17 09:33:30.758: INFO: Lookups using dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7598.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7598.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7598.svc.cluster.local jessie_udp@dns-test-service-2.dns-7598.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7598.svc.cluster.local]

Dec 17 09:33:35.738: INFO: DNS probes using dns-7598/dns-test-e12ee94c-0be1-4609-bf9b-cf589341b210 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:38.327 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 200 lines ...
test/e2e/kubectl/framework.go:23
  Update Demo
  test/e2e/kubectl/kubectl.go:329
    should scale a replication controller  [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":4,"skipped":42,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 45 lines ...
STEP: Deleting the previously created pod
Dec 17 09:33:24.407: INFO: Deleting pod "pvc-volume-tester-2jnnq" in namespace "csi-mock-volumes-3109"
Dec 17 09:33:24.413: INFO: Wait up to 5m0s for pod "pvc-volume-tester-2jnnq" to be fully deleted
STEP: Checking CSI driver logs
Dec 17 09:33:34.513: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3109","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3109","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3109","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-4b8ba502-2983-4459-9574-04c126d9cc1f","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-4b8ba502-2983-4459-9574-04c126d9cc1f"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-3109","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-3109","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-4b8ba502-2983-4459-9574-04c126d9cc1f","storage.kubernetes.io/csiProvisionerIdentity":"1576575173862-8081-csi-mock-csi-mock-volumes-3109"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4b8ba502-2983-4459-9574-04c126d9cc1f/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-4b8ba502-2983-4459-9574-04c126d9cc1f","storage.kubernetes.io/csiProvisionerIdentity":"1576575173862-8081-csi-mock-csi-mock-volumes-3109"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4b8ba502-2983-4459-9574-04c126d9cc1f/globalmount","target_path":"/var/lib/kubelet/pods/fbfa6f79-7fd4-4b0f-bc5b-620ab2dba551/volumes/kubernetes.io~csi/pvc-4b8ba502-2983-4459-9574-04c126d9cc1f/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-4b8ba502-2983-4459-9574-04c126d9cc1f","storage.kubernetes.io/csiProvisionerIdentity":"1576575173862-8081-csi-mock-csi-mock-volumes-3109"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/fbfa6f79-7fd4-4b0f-bc5b-620ab2dba551/volumes/kubernetes.io~csi/pvc-4b8ba502-2983-4459-9574-04c126d9cc1f/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4b8ba502-2983-4459-9574-04c126d9cc1f/globalmount"},"Response":{},"Error":""}

Dec 17 09:33:34.513: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-2jnnq
Dec 17 09:33:34.513: INFO: Deleting pod "pvc-volume-tester-2jnnq" in namespace "csi-mock-volumes-3109"
STEP: Deleting claim pvc-2j6zm
Dec 17 09:33:34.593: INFO: Waiting up to 2m0s for PersistentVolume pvc-4b8ba502-2983-4459-9574-04c126d9cc1f to get deleted
... skipping 38 lines ...
test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  test/e2e/storage/csi_mock_volume.go:296
    should not be passed when podInfoOnMount=nil
    test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":4,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:37.313: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
... skipping 127 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":61,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:66.012 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  test/e2e/apimachinery/garbage_collector.go:972
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":6,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:41.557: INFO: Only supported for providers [openstack] (not skeleton)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:175
Dec 17 09:33:41.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 106 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":42,"failed":0}
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:33:42.089: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 31 lines ...
Dec 17 09:33:41.844: INFO: stderr: ""
Dec 17 09:33:41.844: INFO: stdout: "scheduler controller-manager etcd-0"
STEP: getting details of componentstatuses
STEP: getting status of scheduler
Dec 17 09:33:41.844: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35987 --kubeconfig=/root/.kube/kind-test-config get componentstatuses scheduler'
Dec 17 09:33:41.983: INFO: stderr: ""
Dec 17 09:33:41.983: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of controller-manager
Dec 17 09:33:41.983: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35987 --kubeconfig=/root/.kube/kind-test-config get componentstatuses controller-manager'
Dec 17 09:33:42.119: INFO: stderr: ""
Dec 17 09:33:42.119: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of etcd-0
Dec 17 09:33:42.119: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35987 --kubeconfig=/root/.kube/kind-test-config get componentstatuses etcd-0'
Dec 17 09:33:42.279: INFO: stderr: ""
Dec 17 09:33:42.279: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:175
Dec 17 09:33:42.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7427" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":7,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:42.342: INFO: Driver vsphere doesn't support ntfs -- skipping
... skipping 82 lines ...
• [SLOW TEST:8.293 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":5,"skipped":25,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 112 lines ...
• [SLOW TEST:10.345 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":55,"failed":0}
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:33:44.780: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename zone-support
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 86 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl logs
  test/e2e/kubectl/kubectl.go:1461
    should be able to retrieve and filter logs  [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":7,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:46.000: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/framework/framework.go:175
Dec 17 09:33:46.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 100 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:524
    should support port-forward
    test/e2e/kubectl/kubectl.go:751
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":6,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:47.482: INFO: Only supported for providers [aws] (not skeleton)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/framework/framework.go:175
Dec 17 09:33:47.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 37 lines ...
• [SLOW TEST:14.210 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":6,"skipped":37,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:48.219: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 102 lines ...
• [SLOW TEST:20.288 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":63,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:97.110 seconds]
[sig-storage] Projected secret
test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:49.204: INFO: Only supported for providers [openstack] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:175
Dec 17 09:33:49.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Only supported for providers [openstack] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1010
------------------------------
SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":9,"skipped":42,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:33:34.712: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 79 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:33:52.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1199" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":56,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":49,"failed":0}
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:33:44.282: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
  test/e2e/common/runtime.go:38
    when running a container with a new image
    test/e2e/common/runtime.go:263
      should not be able to pull image from invalid registry [NodeConformance]
      test/e2e/common/runtime.go:369
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":6,"skipped":49,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:52.467: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 59 lines ...
  test/e2e/kubectl/portforward.go:464
    that expects a client request
    test/e2e/kubectl/portforward.go:465
      should support a client that connects, sends DATA, and disconnects
      test/e2e/kubectl/portforward.go:469
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":5,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 30 lines ...
test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:43
    nonexistent volume subPath should have the correct mode and owner using FSGroup
    test/e2e/common/empty_dir.go:58
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":6,"skipped":35,"failed":0}

SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:56.343: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 60 lines ...
• [SLOW TEST:19.286 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":6,"skipped":63,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:56.767: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 45 lines ...
test/e2e/framework/framework.go:680
  When creating a container with runAsUser
  test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:33:57.770: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:175
Dec 17 09:33:57.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 37 lines ...
• [SLOW TEST:12.199 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":90,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 57 lines ...
test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:43
    new files should be created with FSGroup ownership when container is non-root
    test/e2e/common/empty_dir.go:54
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":8,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 50 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl copy
  test/e2e/kubectl/kubectl.go:1421
    should copy a file from a running Pod
    test/e2e/kubectl/kubectl.go:1440
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":11,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 44 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:34:02.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1218" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":28,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":5,"skipped":65,"failed":0}
[BeforeEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:33:34.951: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 39 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":65,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:03.470: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 85 lines ...
• [SLOW TEST:7.192 seconds]
[sig-scheduling] LimitRange
test/e2e/scheduling/framework.go:39
  should create a LimitRange with defaults and ensure pod has those defaults applied.
  test/e2e/scheduling/limit_range.go:55
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.","total":-1,"completed":7,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [sig-instrumentation] Cadvisor
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:34:03.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cadvisor-1337" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Cadvisor should be healthy on every node.","total":-1,"completed":8,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 57 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 11 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:34:04.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-1877" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":6,"skipped":29,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:04.294: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 73 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:34:05.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1384" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":7,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:8.138 seconds]
[sig-storage] Projected secret
test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":102,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:10.320 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  test/e2e/network/dns.go:86
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":12,"skipped":63,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:120.493 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently
  test/e2e/apps/cronjob.go:60
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently","total":-1,"completed":4,"skipped":39,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 41 lines ...
• [SLOW TEST:10.234 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":9,"skipped":31,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:12.280: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 74 lines ...
• [SLOW TEST:10.303 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/configmap_volume.go:72
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":9,"skipped":53,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":6,"skipped":81,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:33:16.731: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
STEP: Deleting the previously created pod
Dec 17 09:34:06.034: INFO: Deleting pod "pvc-volume-tester-mnqkg" in namespace "csi-mock-volumes-2105"
Dec 17 09:34:06.042: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mnqkg" to be fully deleted
STEP: Checking CSI driver logs
Dec 17 09:34:14.063: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2105","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-b568fceb-6a43-41fa-b94f-81ee0858d087","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-b568fceb-6a43-41fa-b94f-81ee0858d087"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2105","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-2105","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-2105","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-2105","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-b568fceb-6a43-41fa-b94f-81ee0858d087","storage.kubernetes.io/csiProvisionerIdentity":"1576575205638-8081-csi-mock-csi-mock-volumes-2105"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b568fceb-6a43-41fa-b94f-81ee0858d087/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-b568fceb-6a43-41fa-b94f-81ee0858d087","storage.kubernetes.io/csiProvisionerIdentity":"1576575205638-8081-csi-mock-csi-mock-volumes-2105"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b568fceb-6a43-41fa-b94f-81ee0858d087/globalmount","target_path":"/var/lib/kubelet/pods/bdc0fd2b-caa1-440f-ac2d-8b26115d35ad/volumes/kubernetes.io~csi/pvc-b568fceb-6a43-41fa-b94f-81ee0858d087/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"pvc-volume-tester-mnqkg","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-2105","csi.storage.k8s.io/pod.uid":"bdc0fd2b-caa1-440f-ac2d-8b26115d35ad","csi.storage.k8s.io/serviceAccount.name":"default","name":"pvc-b568fceb-6a43-41fa-b94f-81ee0858d087","storage.kubernetes.io/csiProvisionerIdentity":"1576575205638-8081-csi-mock-csi-mock-volumes-2105"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/bdc0fd2b-caa1-440f-ac2d-8b26115d35ad/volumes/kubernetes.io~csi/pvc-b568fceb-6a43-41fa-b94f-81ee0858d087/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-b568fceb-6a43-41fa-b94f-81ee0858d087/globalmount"},"Response":{},"Error":""}

Dec 17 09:34:14.063: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-2105
Dec 17 09:34:14.063: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: bdc0fd2b-caa1-440f-ac2d-8b26115d35ad
Dec 17 09:34:14.063: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Dec 17 09:34:14.063: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Dec 17 09:34:14.063: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-mnqkg
... skipping 43 lines ...
test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  test/e2e/storage/csi_mock_volume.go:296
    should be passed when podInfoOnMount=true
    test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":7,"skipped":81,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 43 lines ...
STEP: Deleting the previously created pod
Dec 17 09:34:03.922: INFO: Deleting pod "pvc-volume-tester-g4f58" in namespace "csi-mock-volumes-197"
Dec 17 09:34:03.949: INFO: Wait up to 5m0s for pod "pvc-volume-tester-g4f58" to be fully deleted
STEP: Checking CSI driver logs
Dec 17 09:34:12.051: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-197","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-197","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-197","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-197","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-57a76a94-44c9-4433-bb5c-a2bd8d9017cd","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-57a76a94-44c9-4433-bb5c-a2bd8d9017cd"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-197","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-57a76a94-44c9-4433-bb5c-a2bd8d9017cd","storage.kubernetes.io/csiProvisionerIdentity":"1576575229484-8081-csi-mock-csi-mock-volumes-197"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-57a76a94-44c9-4433-bb5c-a2bd8d9017cd/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-57a76a94-44c9-4433-bb5c-a2bd8d9017cd","storage.kubernetes.io/csiProvisionerIdentity":"1576575229484-8081-csi-mock-csi-mock-volumes-197"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-57a76a94-44c9-4433-bb5c-a2bd8d9017cd/globalmount","target_path":"/var/lib/kubelet/pods/d5f929fd-efb5-402d-8c33-35b9709c9499/volumes/kubernetes.io~csi/pvc-57a76a94-44c9-4433-bb5c-a2bd8d9017cd/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-57a76a94-44c9-4433-bb5c-a2bd8d9017cd","storage.kubernetes.io/csiProvisionerIdentity":"1576575229484-8081-csi-mock-csi-mock-volumes-197"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/d5f929fd-efb5-402d-8c33-35b9709c9499/volumes/kubernetes.io~csi/pvc-57a76a94-44c9-4433-bb5c-a2bd8d9017cd/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-57a76a94-44c9-4433-bb5c-a2bd8d9017cd/globalmount"},"Response":{},"Error":""}

Dec 17 09:34:12.051: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-g4f58
Dec 17 09:34:12.051: INFO: Deleting pod "pvc-volume-tester-g4f58" in namespace "csi-mock-volumes-197"
STEP: Deleting claim pvc-fdmh6
Dec 17 09:34:12.100: INFO: Waiting up to 2m0s for PersistentVolume pvc-57a76a94-44c9-4433-bb5c-a2bd8d9017cd to get deleted
... skipping 38 lines ...
test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  test/e2e/storage/csi_mock_volume.go:296
    should not be passed when CSIDriver does not exist
    test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":5,"skipped":45,"failed":0}

SSSS
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 35 lines ...
  test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
S
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0}

SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:34:18.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4736" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":-1,"completed":6,"skipped":68,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:19.152: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 50 lines ...
• [SLOW TEST:8.136 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":55,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:12.973 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":13,"skipped":70,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:23.595: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 36 lines ...
• [SLOW TEST:18.621 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":10,"skipped":106,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:25.093: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 104 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:94
      should store data
      test/e2e/storage/testsuites/volumes.go:150
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:25.427: INFO: Only supported for providers [aws] (not skeleton)
... skipping 75 lines ...
• [SLOW TEST:10.178 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":83,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:34
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 11 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:34:26.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-1407" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":9,"skipped":85,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:26.719: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 51 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":7,"skipped":75,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:34:12.296: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:685
STEP: creating the pod
Dec 17 09:34:12.341: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Dec 17 09:34:28.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8317" for this suite.


• [SLOW TEST:15.847 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:680
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":10,"skipped":42,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:34:28.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2383" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":8,"skipped":81,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:28.225: INFO: Driver gluster doesn't support ntfs -- skipping
... skipping 183 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":9,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:33.821: INFO: Only supported for providers [openstack] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:175
Dec 17 09:34:33.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 10 lines ...
      test/e2e/storage/testsuites/volumes.go:191

      Only supported for providers [openstack] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1010
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:32:03.459: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
• [SLOW TEST:150.911 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:680
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 58 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    test/e2e/storage/testsuites/base.go:94
      should store data
      test/e2e/storage/testsuites/volumes.go:150
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":8,"skipped":83,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 58 lines ...
• [SLOW TEST:10.157 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":61,"failed":0}
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:34:35.607: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename zone-support
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:34:35.765: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-18ba1ad7-8c52-4f1e-84f6-bde5fc36295b
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:175
Dec 17 09:34:35.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4414" for this suite.
... skipping 31 lines ...
• [SLOW TEST:8.247 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_configmap.go:57
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":11,"skipped":47,"failed":0}

SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 33 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:160
    should be able to handle large requests: udp
    test/e2e/network/networking.go:305
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp","total":-1,"completed":6,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:36.464: INFO: Driver cinder doesn't support ext4 -- skipping
... skipping 52 lines ...
• [SLOW TEST:17.743 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should allow pods to hairpin back to themselves through services
  test/e2e/network/service.go:938
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":7,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:36.909: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175
Dec 17 09:34:36.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 67 lines ...
test/e2e/common/networking.go:26
  Granular Checks: Pods
  test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":59,"failed":0}

SS
------------------------------
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 89 lines ...
• [SLOW TEST:7.129 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 119 lines ...
• [SLOW TEST:9.139 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":8,"skipped":81,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:46.064: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 47 lines ...
• [SLOW TEST:10.262 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:680
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  test/e2e/node/security_context.go:76
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":7,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:46.743: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 61 lines ...
• [SLOW TEST:51.411 seconds]
[sig-network] Networking
test/e2e/network/framework.go:23
  should check kube-proxy urls
  test/e2e/network/networking.go:147
------------------------------
{"msg":"PASSED [sig-network] Networking should check kube-proxy urls","total":-1,"completed":7,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:48.182: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175
Dec 17 09:34:48.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 163 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:160
    should update endpoints: http
    test/e2e/network/networking.go:216
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: http","total":-1,"completed":6,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:34:49.957: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 92 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":10,"skipped":88,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:30.145 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:680
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":73,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:8.131 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":83,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 121 lines ...
• [SLOW TEST:21.125 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":9,"skipped":89,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 59 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":10,"skipped":43,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 98 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:242
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:243
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":57,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:34:35.182: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 57 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:03.998: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 46 lines ...
• [SLOW TEST:14.129 seconds]
[sig-node] RuntimeClass
test/e2e/common/runtimeclass.go:39
  should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
  test/e2e/common/runtimeclass.go:55
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]","total":-1,"completed":7,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 66 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:680
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:04.376: INFO: Only supported for providers [vsphere] (not skeleton)
... skipping 125 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":5,"skipped":52,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 64 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":59,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 57 lines ...
• [SLOW TEST:16.279 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":91,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:07.356: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 66 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      test/e2e/storage/testsuites/base.go:688
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":4,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:34:30.003: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 50 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":53,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
test/e2e/common/networking.go:26
  Granular Checks: Pods
  test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":78,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 76 lines ...
• [SLOW TEST:18.186 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it
  test/e2e/apps/disruption.go:200
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it","total":-1,"completed":10,"skipped":109,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":64,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:34:42.679: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 88 lines ...
• [SLOW TEST:12.170 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":15,"skipped":77,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:35:00.508: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 25 lines ...
• [SLOW TEST:16.353 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":77,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:16.868: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 51 lines ...
• [SLOW TEST:16.272 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":59,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:12.033 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":6,"skipped":55,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:17.726: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 128 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:35:18.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9310" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":-1,"completed":7,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:18.354: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 72 lines ...
• [SLOW TEST:73.567 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  test/e2e/apps/cronjob.go:194
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":8,"skipped":41,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:18.894: INFO: Only supported for providers [aws] (not skeleton)
... skipping 92 lines ...
    Only supported for node OS distro [gci ubuntu custom] (not debian)

    test/e2e/common/volumes.go:65
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":5,"skipped":63,"failed":0}
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:34:35.873: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 100 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:687
[It] should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:685
STEP: creating service multi-endpoint-test in namespace services-1394
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1394 to expose endpoints map[]
Dec 17 09:34:55.783: INFO: Get endpoints failed (3.704673ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 17 09:34:56.788: INFO: successfully validated that service multi-endpoint-test in namespace services-1394 exposes endpoints map[] (1.008422627s elapsed)
STEP: Creating pod pod1 in namespace services-1394
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1394 to expose endpoints map[pod1:[100]]
Dec 17 09:35:00.909: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.114215173s elapsed, will retry)
Dec 17 09:35:05.980: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.185342689s elapsed, will retry)
Dec 17 09:35:11.024: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (14.229089809s elapsed, will retry)
... skipping 19 lines ...
• [SLOW TEST:26.634 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":10,"skipped":92,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:22.324: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 40 lines ...
• [SLOW TEST:14.290 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":55,"failed":0}

SSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":13,"skipped":64,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:35:13.622: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:11.130 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":14,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:8.361 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/configmap_volume.go:107
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":78,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 29 lines ...
test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:43
    volume on default medium should have the correct mode using FSGroup
    test/e2e/common/empty_dir.go:66
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":9,"skipped":96,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:34:45.661: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 56 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":10,"skipped":96,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":11,"skipped":113,"failed":0}
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:34:43.395: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:160
    should function for endpoint-Service: http
    test/e2e/network/networking.go:198
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http","total":-1,"completed":12,"skipped":113,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:29.516: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/framework/framework.go:175
Dec 17 09:35:29.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 75 lines ...
• [SLOW TEST:48.265 seconds]
[sig-storage] PVC Protection
test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PVC that is not in active use by a pod
  test/e2e/storage/pvc_protection.go:106
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":5,"skipped":18,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:29.791: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 46 lines ...
• [SLOW TEST:18.504 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:680
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":98,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:30.854: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 41 lines ...
• [SLOW TEST:126.235 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  test/e2e/apps/cronjob.go:171
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":5,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 45 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":51,"failed":0}

SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:34.365: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 23 lines ...
STEP: Creating a kubernetes client
Dec 17 09:35:34.371: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pod-disks
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  test/e2e/storage/pd.go:73
[It] should be able to delete a non-existent PD without error
  test/e2e/storage/pd.go:446
Dec 17 09:35:34.418: INFO: Only supported for providers [gce] (not skeleton)
[AfterEach] [sig-storage] Pod Disks
  test/e2e/framework/framework.go:175
Dec 17 09:35:34.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-4256" for this suite.


S [SKIPPING] [0.059 seconds]
[sig-storage] Pod Disks
test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [It]
  test/e2e/storage/pd.go:446

  Only supported for providers [gce] (not skeleton)

  test/e2e/storage/pd.go:447
------------------------------
... skipping 2 lines ...
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:35:34.434: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-b6bffcc7-ddc7-40bb-8aa5-f1cfe2592966
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Dec 17 09:35:34.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4239" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":12,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 59 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":8,"skipped":81,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":17,"skipped":81,"failed":0}
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:35:27.018: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
• [SLOW TEST:11.117 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":18,"skipped":81,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 37 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:35:38.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-479" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":94,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:38.343: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 226 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    test/e2e/storage/testsuites/base.go:94
      should store data
      test/e2e/storage/testsuites/volumes.go:150
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":6,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 67 lines ...
• [SLOW TEST:10.164 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":105,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:41.026: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/framework/framework.go:175
Dec 17 09:35:41.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 36 lines ...
• [SLOW TEST:20.679 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":99,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:14.208 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":126,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:10.194 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":71,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:44.686: INFO: Only supported for providers [azure] (not skeleton)
... skipping 90 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:203
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":12,"skipped":113,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Lease
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 6 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:35:44.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-4069" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":13,"skipped":114,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 59 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":30,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:45.387: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 51 lines ...
• [SLOW TEST:10.155 seconds]
[sig-storage] Projected combined
test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":83,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:45.750: INFO: Driver gluster doesn't support ext3 -- skipping
... skipping 48 lines ...
      Driver "local" does not provide raw block - skipping

      test/e2e/storage/testsuites/volumes.go:99
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:35:16.274: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 59 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":31,"failed":0}

SSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 11 lines ...
  test/e2e/framework/framework.go:175
Dec 17 09:35:46.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9415" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":10,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 66 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":7,"skipped":60,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-windows] Windows volume mounts 
  test/e2e/windows/framework.go:28
Dec 17 09:35:47.028: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 109 lines ...
test/e2e/storage/utils/framework.go:23
  ConfigMap
  test/e2e/storage/volumes.go:45
    should be mountable
    test/e2e/storage/volumes.go:46
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":9,"skipped":59,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:48.196: INFO: Only supported for providers [vsphere] (not skeleton)
... skipping 74 lines ...
• [SLOW TEST:12.149 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":80,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":11,"skipped":39,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Dec 17 09:35:40.782: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 27 lines ...
• [SLOW TEST:16.210 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":39,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:57.012: INFO: Only supported for providers [gce gke] (not skeleton)
... skipping 55 lines ...
test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:43
    new files should be created with FSGroup ownership when container is root
    test/e2e/common/empty_dir.go:50
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":14,"skipped":127,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:20.233 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":113,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Dec 17 09:35:58.620: INFO: Driver local doesn't support ntfs -- skipping
... skipping 64 lines ...
      test/e2e/storage/testsuites/volumes.go:150

      Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

      test/e2e/storage/testsuites/base.go:148
------------------------------
{"component":"entrypoint","file":"prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","time":"2019-12-17T09:35:59Z"}