PR: MorrisLaw: move KubectlCmd out of utils into its own package
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-11-22 03:19
Elapsed: 15m10s
Revision: 769e6e0e55a9d700be0a341e3bbafb81f2aa5598
Refs: 84613

No Test Failures!


Error lines from build-log.txt

... skipping 188 lines ...
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.4:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 29 lines ...
localAPIEndpoint:
  advertiseAddress: 172.17.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.17.0.4
... skipping 4 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 29 lines ...
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.4:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 112 lines ...
I1122 03:23:46.008983     133 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=10s  in 0 milliseconds
I1122 03:23:46.508920     133 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=10s  in 0 milliseconds
I1122 03:23:47.008983     133 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=10s  in 0 milliseconds
I1122 03:23:47.508954     133 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=10s  in 0 milliseconds
I1122 03:23:48.008927     133 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=10s  in 0 milliseconds
I1122 03:23:48.509050     133 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=10s  in 0 milliseconds
I1122 03:23:53.041498     133 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=10s 500 Internal Server Error in 4032 milliseconds
I1122 03:23:53.511560     133 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I1122 03:23:54.012161     133 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=10s 500 Internal Server Error in 3 milliseconds
I1122 03:23:54.510565     133 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
[apiclient] All control plane components are healthy after 11.003958 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1122 03:23:55.011357     133 round_trippers.go:443] GET https://172.17.0.4:6443/healthz?timeout=10s 200 OK in 2 milliseconds
I1122 03:23:55.011498     133 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
I1122 03:23:55.018660     133 round_trippers.go:443] POST https://172.17.0.4:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 4 milliseconds
I1122 03:23:55.023748     133 round_trippers.go:443] POST https://172.17.0.4:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 4 milliseconds
... skipping 114 lines ...
I1122 03:23:59.871904     330 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I1122 03:23:59.872002     330 checks.go:432] validating if the connectivity type is via proxy or direct
I1122 03:23:59.872039     330 join.go:441] [preflight] Discovering cluster-info
I1122 03:23:59.872125     330 token.go:188] [discovery] Trying to connect to API Server "172.17.0.4:6443"
I1122 03:23:59.872640     330 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.4:6443"
I1122 03:23:59.881968     330 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 9 milliseconds
I1122 03:23:59.883219     330 token.go:191] [discovery] Failed to connect to API Server "172.17.0.4:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1122 03:24:04.883423     330 token.go:188] [discovery] Trying to connect to API Server "172.17.0.4:6443"
I1122 03:24:04.883896     330 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.4:6443"
I1122 03:24:04.886413     330 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 2 milliseconds
I1122 03:24:04.886828     330 token.go:191] [discovery] Failed to connect to API Server "172.17.0.4:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1122 03:24:09.887060     330 token.go:188] [discovery] Trying to connect to API Server "172.17.0.4:6443"
I1122 03:24:09.887532     330 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.4:6443"
I1122 03:24:09.889612     330 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 1 milliseconds
I1122 03:24:09.890213     330 token.go:191] [discovery] Failed to connect to API Server "172.17.0.4:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1122 03:24:14.890396     330 token.go:188] [discovery] Trying to connect to API Server "172.17.0.4:6443"
I1122 03:24:14.891698     330 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.4:6443"
I1122 03:24:14.896976     330 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 4 milliseconds
I1122 03:24:14.899172     330 token.go:103] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "172.17.0.4:6443"
I1122 03:24:14.899570     330 token.go:194] [discovery] Successfully established connection with API Server "172.17.0.4:6443"
I1122 03:24:14.899881     330 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
... skipping 95 lines ...
I1122 03:23:59.861675     328 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I1122 03:23:59.861763     328 checks.go:432] validating if the connectivity type is via proxy or direct
I1122 03:23:59.861852     328 join.go:441] [preflight] Discovering cluster-info
I1122 03:23:59.862027     328 token.go:188] [discovery] Trying to connect to API Server "172.17.0.4:6443"
I1122 03:23:59.862602     328 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.4:6443"
I1122 03:23:59.876511     328 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 13 milliseconds
I1122 03:23:59.877889     328 token.go:191] [discovery] Failed to connect to API Server "172.17.0.4:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1122 03:24:04.878108     328 token.go:188] [discovery] Trying to connect to API Server "172.17.0.4:6443"
I1122 03:24:04.878958     328 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.4:6443"
I1122 03:24:04.882172     328 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 2 milliseconds
I1122 03:24:04.882816     328 token.go:191] [discovery] Failed to connect to API Server "172.17.0.4:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1122 03:24:09.883260     328 token.go:188] [discovery] Trying to connect to API Server "172.17.0.4:6443"
I1122 03:24:09.883992     328 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.4:6443"
I1122 03:24:09.886830     328 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 2 milliseconds
I1122 03:24:09.887342     328 token.go:191] [discovery] Failed to connect to API Server "172.17.0.4:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1122 03:24:14.887949     328 token.go:188] [discovery] Trying to connect to API Server "172.17.0.4:6443"
I1122 03:24:14.888693     328 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.4:6443"
I1122 03:24:14.892492     328 round_trippers.go:443] GET https://172.17.0.4:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 3 milliseconds
I1122 03:24:14.894341     328 token.go:103] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "172.17.0.4:6443"
I1122 03:24:14.894374     328 token.go:194] [discovery] Successfully established connection with API Server "172.17.0.4:6443"
I1122 03:24:14.894404     328 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
... skipping 95 lines ...
Will run 4814 specs

Running in parallel across 25 nodes

Nov 22 03:25:07.814: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:25:07.818: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 22 03:25:07.838: INFO: Condition Ready of node kind-worker is false instead of true. Reason: KubeletNotReady, message: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Nov 22 03:25:07.838: INFO: Condition Ready of node kind-worker2 is false instead of true. Reason: KubeletNotReady, message: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Nov 22 03:25:07.838: INFO: Unschedulable nodes:
Nov 22 03:25:07.838: INFO: -> kind-worker Ready=false Network=false Taints=[{node.kubernetes.io/not-ready  NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Nov 22 03:25:07.838: INFO: -> kind-worker2 Ready=false Network=false Taints=[{node.kubernetes.io/not-ready  NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Nov 22 03:25:07.838: INFO: ================================
Nov 22 03:25:37.841: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 22 03:25:37.872: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
... skipping 621 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      test/e2e/storage/testsuites/base.go:697
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:25:38.328: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:25:40.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-250" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 42 lines ...
• [SLOW TEST:6.741 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:25:44.825: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 109 lines ...
• [SLOW TEST:9.106 seconds]
[sig-storage] Secrets
test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 49 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl patch
  test/e2e/kubectl/kubectl.go:1520
    should add annotations for pods in rc  [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 7 lines ...
  test/e2e/common/security_context.go:210
Nov 22 03:25:40.185: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-47a551e5-ae19-4129-afae-05cca1cc8ad2" in namespace "security-context-test-2858" to be "success or failure"
Nov 22 03:25:40.246: INFO: Pod "busybox-readonly-true-47a551e5-ae19-4129-afae-05cca1cc8ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 60.436196ms
Nov 22 03:25:42.284: INFO: Pod "busybox-readonly-true-47a551e5-ae19-4129-afae-05cca1cc8ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098231371s
Nov 22 03:25:44.288: INFO: Pod "busybox-readonly-true-47a551e5-ae19-4129-afae-05cca1cc8ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102487105s
Nov 22 03:25:46.294: INFO: Pod "busybox-readonly-true-47a551e5-ae19-4129-afae-05cca1cc8ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109071419s
Nov 22 03:25:48.305: INFO: Pod "busybox-readonly-true-47a551e5-ae19-4129-afae-05cca1cc8ad2": Phase="Failed", Reason="", readiness=false. Elapsed: 8.119980952s
Nov 22 03:25:48.305: INFO: Pod "busybox-readonly-true-47a551e5-ae19-4129-afae-05cca1cc8ad2" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:150
Nov 22 03:25:48.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2858" for this suite.

... skipping 3 lines ...
test/e2e/framework/framework.go:629
  When creating a pod with readOnlyRootFilesystem
  test/e2e/common/security_context.go:164
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    test/e2e/common/security_context.go:210
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:25:48.322: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 41 lines ...
• [SLOW TEST:11.283 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:12.021 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:9.667 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":22,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 38 lines ...
Nov 22 03:25:46.185: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-5788-crds.spec'
Nov 22 03:25:46.404: INFO: stderr: ""
Nov 22 03:25:46.404: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5788-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Nov 22 03:25:46.404: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-5788-crds.spec.bars'
Nov 22 03:25:46.629: INFO: stderr: ""
Nov 22 03:25:46.629: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5788-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Nov 22 03:25:46.629: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-5788-crds.spec.bars2'
Nov 22 03:25:46.869: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:150
Nov 22 03:25:50.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7339" for this suite.
... skipping 2 lines ...
• [SLOW TEST:12.588 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:25:50.676: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 42 lines ...
• [SLOW TEST:7.507 seconds]
[sig-auth] ServiceAccounts
test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 35 lines ...
• [SLOW TEST:18.845 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:25:56.843: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 116 lines ...
• [SLOW TEST:19.380 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  test/e2e/apimachinery/resource_quota.go:559
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":1,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:25:57.463: INFO: Only supported for providers [aws] (not skeleton)
... skipping 115 lines ...
• [SLOW TEST:8.213 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 167 lines ...
• [SLOW TEST:20.775 seconds]
[sig-storage] PVC Protection
test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PVC that is not in active use by a pod
  test/e2e/storage/pvc_protection.go:107
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 56 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:203
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:226
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 29 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:437
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":2,"skipped":16,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 55 lines ...
      test/e2e/storage/testsuites/volumes.go:150

      Driver "local" does not provide raw block - skipping

      test/e2e/storage/testsuites/volumes.go:99
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:25:44.861: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 39 lines ...
• [SLOW TEST:22.138 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  test/e2e/apps/deployment.go:111
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":2,"skipped":9,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 60 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:189
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 62 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:359
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [sig-api-machinery] Generated clientset
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:26:09.368: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename clientset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:26:09.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-9527" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":2,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 88 lines ...
• [SLOW TEST:14.106 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:09.927: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 154 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl replace
  test/e2e/kubectl/kubectl.go:1874
    should update a single-container pod's image  [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 75 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:203
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 13 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:26:10.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-853" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:10.731: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 70 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:12.602: INFO: Driver emptydir doesn't support ext3 -- skipping
... skipping 145 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:189
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:14.177 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 50 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl copy
  test/e2e/kubectl/kubectl.go:1401
    should copy a file from a running Pod
    test/e2e/kubectl/kubectl.go:1420
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":2,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Destroying namespace "services-4869" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:143

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":3,"skipped":27,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:26:13.999: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:26:14.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-545" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run CronJob should create a CronJob","total":-1,"completed":4,"skipped":27,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 61 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:374
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 171 lines ...
test/e2e/kubectl/framework.go:23
  Update Demo
  test/e2e/kubectl/kubectl.go:328
    should scale a replication controller  [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 94 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 17 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:26:18.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3513" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:18.687: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 47 lines ...
• [SLOW TEST:8.157 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:21.099: INFO: Driver local doesn't support ntfs -- skipping
... skipping 49 lines ...
• [SLOW TEST:8.153 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 32 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  test/e2e/kubectl/kubectl.go:1033
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    test/e2e/kubectl/kubectl.go:1078
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":2,"skipped":19,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 65 lines ...
• [SLOW TEST:10.141 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:31.372: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 66 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:189
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:26:34.482: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename zone-support
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 54 lines ...
• [SLOW TEST:18.198 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:36.890: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:150
Nov 22 03:26:36.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 127 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:203
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:40.717: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 58 lines ...
• [SLOW TEST:50.408 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:41.096: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:26:41.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 80 lines ...
      Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

      test/e2e/storage/testsuites/base.go:154
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":2,"skipped":30,"failed":0}
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:26:17.843: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:24.096 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  test/e2e/apps/job.go:42
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":3,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:41.942: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 15 lines ...
      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      test/e2e/storage/testsuites/base.go:154
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":3,"skipped":4,"failed":0}
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:26:09.601: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
• [SLOW TEST:32.502 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should support configurable pod resolv.conf
  test/e2e/network/dns.go:454
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":4,"skipped":4,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 210 lines ...
Nov 22 03:26:30.587: INFO: Deleting pod "pvc-volume-tester-25m4x" in namespace "csi-mock-volumes-4256"
Nov 22 03:26:30.598: INFO: Wait up to 5m0s for pod "pvc-volume-tester-25m4x" to be fully deleted
WARNING: pod log: pvc-volume-tester-25m4x/volume-tester: pods "pvc-volume-tester-25m4x" not found
STEP: Checking CSI driver logs
Nov 22 03:26:40.626: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4256","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4256","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4256","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-f17c7db7-0ad1-41f5-b21f-2a625d2e500f","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-f17c7db7-0ad1-41f5-b21f-2a625d2e500f"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4256","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-4256","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f17c7db7-0ad1-41f5-b21f-2a625d2e500f","storage.kubernetes.io/csiProvisionerIdentity":"1574393159422-8081-csi-mock-csi-mock-volumes-4256"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-4256","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f17c7db7-0ad1-41f5-b21f-2a625d2e500f","storage.kubernetes.io/csiProvisionerIdentity":"1574393159422-8081-csi-mock-csi-mock-volumes-4256"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f17c7db7-0ad1-41f5-b21f-2a625d2e500f/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f17c7db7-0ad1-41f5-b21f-2a625d2e500f","storage.kubernetes.io/csiProvisionerIdentity":"1574393159422-8081-csi-mock-csi-mock-volumes-4256"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f17c7db7-0ad1-41f5-b21f-2a625d2e500f/globalmount","target_path":"/var/lib/kubelet/pods/e7cb8a9f-e4f2-4353-852a-b6c55be23d25/volumes/kubernetes.io~csi/pvc-f17c7db7-0ad1-41f5-b21f-2a625d2e500f/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-f17c7db7-0ad1-41f5-b21f-2a625d2e500f","storage.kubernetes.io/csiProvisionerIdentity":"1574393159422-8081-csi-mock-csi-mock-volumes-4256"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/e7cb8a9f-e4f2-4353-852a-b6c55be23d25/volumes/kubernetes.io~csi/pvc-f17c7db7-0ad1-41f5-b21f-2a625d2e500f/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/e7cb8a9f-e4f2-4353-852a-b6c55be23d25/volumes/kubernetes.io~csi/pvc-f17c7db7-0ad1-41f5-b21f-2a625d2e500f/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-f17c7db7-0ad1-41f5-b21f-2a625d2e500f/globalmount"},"Response":{},"Error":""}

Nov 22 03:26:40.626: INFO: Found NodeUnpublishVolume: {Method:/csi.v1.Node/NodeUnpublishVolume Request:{VolumeContext:map[]}}
STEP: Deleting pod pvc-volume-tester-25m4x
Nov 22 03:26:40.626: INFO: Deleting pod "pvc-volume-tester-25m4x" in namespace "csi-mock-volumes-4256"
STEP: Deleting claim pvc-f4ldg
Nov 22 03:26:40.640: INFO: Waiting up to 2m0s for PersistentVolume pvc-f17c7db7-0ad1-41f5-b21f-2a625d2e500f to get deleted
... skipping 37 lines ...
test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  test/e2e/storage/csi_mock_volume.go:296
    should not be passed when CSIDriver does not exist
    test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:42.987: INFO: Driver local doesn't support ext4 -- skipping
... skipping 80 lines ...
• [SLOW TEST:28.367 seconds]
[sig-storage] PVC Protection
test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  test/e2e/storage/pvc_protection.go:119
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":4,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 56 lines ...
• [SLOW TEST:33.370 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":4,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:43.387: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 45 lines ...
test/e2e/framework/framework.go:629
  when creating containers with AllowPrivilegeEscalation
  test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:44.663: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 49 lines ...
• [SLOW TEST:6.204 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":37,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:49.622: INFO: Only supported for providers [aws] (not skeleton)
... skipping 83 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":19,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 11 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:26:49.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5414" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:9.104 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":4,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 31 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:629
    should have a working scale subresource [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 22 lines ...
  test/e2e/common/runtime.go:38
    on terminated container
    test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":48,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 49 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:161
    should function for pod-Service: http
    test/e2e/network/networking.go:163
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: http","total":-1,"completed":2,"skipped":17,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:56.737: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 49 lines ...
• [SLOW TEST:79.861 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  test/e2e/apps/cronjob.go:194
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":1,"skipped":20,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 60 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:189
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:16.238 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:26:59.260: INFO: Driver emptydir doesn't support ntfs -- skipping
... skipping 48 lines ...
• [SLOW TEST:8.173 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":3,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:26:22.795: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 72 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:359
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":26,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 47 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:437
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":3,"skipped":20,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:03.750: INFO: Driver hostPath doesn't support ext3 -- skipping
... skipping 97 lines ...
Nov 22 03:27:03.804: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.045 seconds]
[sig-storage] PersistentVolumes GCEPD
test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  test/e2e/storage/persistent_volumes-gce.go:124

  Only supported for providers [gce gke] (not skeleton)

  test/e2e/storage/persistent_volumes-gce.go:83
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":65,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:03.808: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 67 lines ...
• [SLOW TEST:66.151 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent
  test/e2e/apps/cronjob.go:139
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:04.488: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:150
Nov 22 03:27:04.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 88 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:189
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:08.529: INFO: Driver vsphere doesn't support ext3 -- skipping
... skipping 80 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:100
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":39,"failed":0}
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:27:08.620: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename multi-az
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 41 lines ...
test/e2e/framework/framework.go:629
  when scheduling a busybox command that always fails in a pod
  test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Flexvolumes
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 149 lines ...
• [SLOW TEST:72.881 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly]
  test/e2e/network/service.go:1821
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly]","total":-1,"completed":2,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] Metadata Concealment
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 96 lines ...
• [SLOW TEST:6.153 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 21 lines ...
test/e2e/framework/framework.go:629
  When creating a container with runAsNonRoot
  test/e2e/common/security_context.go:97
    should run with an explicit non-root user ID [LinuxOnly]
    test/e2e/common/security_context.go:122
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":7,"skipped":71,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:09.997: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 75 lines ...
• [SLOW TEST:14.185 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":27,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 150 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    test/e2e/storage/testsuites/base.go:100
      should store data
      test/e2e/storage/testsuites/volumes.go:150
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":2,"skipped":1,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:15.084: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 39 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:27:15.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2675" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":3,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:15.302: INFO: Only supported for providers [azure] (not skeleton)
... skipping 50 lines ...
• [SLOW TEST:8.176 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/downwardapi_volume.go:90
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:18.147: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:150
Nov 22 03:27:18.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 18 lines ...
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:26:31.381: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  test/e2e/apps/job.go:113
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:150
Nov 22 03:27:19.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7873" for this suite.


• [SLOW TEST:48.088 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  test/e2e/apps/job.go:113
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":4,"skipped":51,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:27:09.346: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
• [SLOW TEST:10.286 seconds]
[sig-storage] Projected secret
test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:19.634: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:150
Nov 22 03:27:19.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 41 lines ...
• [SLOW TEST:10.222 seconds]
[sig-node] RuntimeClass
test/e2e/common/runtimeclass.go:40
  should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
  test/e2e/common/runtimeclass.go:56
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]","total":-1,"completed":8,"skipped":104,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 18 lines ...
  test/e2e/common/runtime.go:38
    when running a container with a new image
    test/e2e/common/runtime.go:263
      should not be able to pull from private registry without secret [NodeConformance]
      test/e2e/common/runtime.go:380
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":4,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 70 lines ...
• [SLOW TEST:6.403 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/downwardapi_volume.go:105
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":20,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:21.720: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 15 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      test/e2e/storage/drivers/in_tree.go:258
------------------------------
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":5,"skipped":42,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes:vsphere
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:27:21.528: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
Nov 22 03:27:21.792: INFO: AfterEach: Cleaning up test resources


S [SKIPPING] in Spec Setup (BeforeEach) [0.264 seconds]
[sig-storage] PersistentVolumes:vsphere
test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on vspehre volume detach [BeforeEach]
  test/e2e/storage/vsphere/persistent_volumes-vsphere.go:163

  Only supported for providers [vsphere] (not skeleton)

  test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
------------------------------
... skipping 37 lines ...
• [SLOW TEST:28.636 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 14 lines ...
Nov 22 03:27:28.404: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.489 seconds]
[sig-storage] PersistentVolumes GCEPD
test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  test/e2e/storage/persistent_volumes-gce.go:139

  Only supported for providers [gce gke] (not skeleton)

  test/e2e/storage/persistent_volumes-gce.go:83
------------------------------
... skipping 186 lines ...
• [SLOW TEST:17.132 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":3,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:29.410: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 146 lines ...
test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  test/e2e/storage/csi_mock_volume.go:419
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    test/e2e/storage/csi_mock_volume.go:448
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":2,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 44 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:30.677: INFO: Driver local doesn't support ntfs -- skipping
... skipping 123 lines ...
test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:43
    new files should be created with FSGroup ownership when container is root
    test/e2e/common/empty_dir.go:50
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":9,"skipped":105,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:31.013: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:150
Nov 22 03:27:31.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 60 lines ...
• [SLOW TEST:9.438 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":45,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:31.250: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 48 lines ...
Nov 22 03:27:00.133: INFO: Unable to read jessie_udp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:00.138: INFO: Unable to read jessie_tcp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:00.143: INFO: Unable to read jessie_udp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:00.147: INFO: Unable to read jessie_tcp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:00.150: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:00.155: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:00.180: INFO: Lookups using dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6138 wheezy_tcp@dns-test-service.dns-6138 wheezy_udp@dns-test-service.dns-6138.svc wheezy_tcp@dns-test-service.dns-6138.svc wheezy_udp@_http._tcp.dns-test-service.dns-6138.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6138.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6138 jessie_tcp@dns-test-service.dns-6138 jessie_udp@dns-test-service.dns-6138.svc jessie_tcp@dns-test-service.dns-6138.svc jessie_udp@_http._tcp.dns-test-service.dns-6138.svc jessie_tcp@_http._tcp.dns-test-service.dns-6138.svc]

Nov 22 03:27:05.185: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:05.190: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:05.196: INFO: Unable to read wheezy_udp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:05.201: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:05.205: INFO: Unable to read wheezy_udp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
... skipping 5 lines ...
Nov 22 03:27:05.263: INFO: Unable to read jessie_udp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:05.269: INFO: Unable to read jessie_tcp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:05.273: INFO: Unable to read jessie_udp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:05.285: INFO: Unable to read jessie_tcp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:05.296: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:05.300: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:05.336: INFO: Lookups using dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6138 wheezy_tcp@dns-test-service.dns-6138 wheezy_udp@dns-test-service.dns-6138.svc wheezy_tcp@dns-test-service.dns-6138.svc wheezy_udp@_http._tcp.dns-test-service.dns-6138.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6138.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6138 jessie_tcp@dns-test-service.dns-6138 jessie_udp@dns-test-service.dns-6138.svc jessie_tcp@dns-test-service.dns-6138.svc jessie_udp@_http._tcp.dns-test-service.dns-6138.svc jessie_tcp@_http._tcp.dns-test-service.dns-6138.svc]

Nov 22 03:27:10.192: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:10.196: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:10.200: INFO: Unable to read wheezy_udp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:10.206: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:10.211: INFO: Unable to read wheezy_udp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
... skipping 5 lines ...
Nov 22 03:27:10.299: INFO: Unable to read jessie_udp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:10.305: INFO: Unable to read jessie_tcp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:10.311: INFO: Unable to read jessie_udp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:10.317: INFO: Unable to read jessie_tcp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:10.322: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:10.328: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:10.361: INFO: Lookups using dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6138 wheezy_tcp@dns-test-service.dns-6138 wheezy_udp@dns-test-service.dns-6138.svc wheezy_tcp@dns-test-service.dns-6138.svc wheezy_udp@_http._tcp.dns-test-service.dns-6138.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6138.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6138 jessie_tcp@dns-test-service.dns-6138 jessie_udp@dns-test-service.dns-6138.svc jessie_tcp@dns-test-service.dns-6138.svc jessie_udp@_http._tcp.dns-test-service.dns-6138.svc jessie_tcp@_http._tcp.dns-test-service.dns-6138.svc]

Nov 22 03:27:15.204: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:15.212: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:15.223: INFO: Unable to read wheezy_udp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:15.257: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:15.270: INFO: Unable to read wheezy_udp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
... skipping 5 lines ...
Nov 22 03:27:15.394: INFO: Unable to read jessie_udp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:15.405: INFO: Unable to read jessie_tcp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:15.422: INFO: Unable to read jessie_udp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:15.435: INFO: Unable to read jessie_tcp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:15.443: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:15.448: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:15.493: INFO: Lookups using dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6138 wheezy_tcp@dns-test-service.dns-6138 wheezy_udp@dns-test-service.dns-6138.svc wheezy_tcp@dns-test-service.dns-6138.svc wheezy_udp@_http._tcp.dns-test-service.dns-6138.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6138.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6138 jessie_tcp@dns-test-service.dns-6138 jessie_udp@dns-test-service.dns-6138.svc jessie_tcp@dns-test-service.dns-6138.svc jessie_udp@_http._tcp.dns-test-service.dns-6138.svc jessie_tcp@_http._tcp.dns-test-service.dns-6138.svc]

Nov 22 03:27:20.195: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:20.201: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:20.214: INFO: Unable to read wheezy_udp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:20.223: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:20.228: INFO: Unable to read wheezy_udp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
... skipping 5 lines ...
Nov 22 03:27:20.500: INFO: Unable to read jessie_udp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:20.556: INFO: Unable to read jessie_tcp@dns-test-service.dns-6138 from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:20.610: INFO: Unable to read jessie_udp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:20.686: INFO: Unable to read jessie_tcp@dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:20.698: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:20.720: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:21.027: INFO: Lookups using dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6138 wheezy_tcp@dns-test-service.dns-6138 wheezy_udp@dns-test-service.dns-6138.svc wheezy_tcp@dns-test-service.dns-6138.svc wheezy_udp@_http._tcp.dns-test-service.dns-6138.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6138.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6138 jessie_tcp@dns-test-service.dns-6138 jessie_udp@dns-test-service.dns-6138.svc jessie_tcp@dns-test-service.dns-6138.svc jessie_udp@_http._tcp.dns-test-service.dns-6138.svc jessie_tcp@_http._tcp.dns-test-service.dns-6138.svc]

Nov 22 03:27:25.304: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:25.330: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6138.svc from pod dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718: the server could not find the requested resource (get pods dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718)
Nov 22 03:27:26.505: INFO: Lookups using dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-6138.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6138.svc]

Nov 22 03:27:31.077: INFO: DNS probes using dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:41.436 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":5,"skipped":38,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:27:19.474: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 28 lines ...
• [SLOW TEST:16.013 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":6,"skipped":38,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:26:42.683: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 54 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:100
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:333
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":3,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:35.587: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 83 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:100
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:333
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":5,"skipped":29,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:39.132: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 35 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:27:39.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7942" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":6,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:27:39.516: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:27:39.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 50 lines ...
• [SLOW TEST:11.138 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":4,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:20.719 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:629
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":53,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:27:09.837: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:634
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:150
Nov 22 03:27:45.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4227" for this suite.


• [SLOW TEST:36.142 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 67 lines ...
test/e2e/network/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols
  test/e2e/network/service.go:978
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":3,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:27:45.985: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:634
STEP: Creating configMap that has name configmap-test-emptyKey-6f600fba-e3f9-4c88-8f0e-e7acea32cad7
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:150
Nov 22 03:27:46.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2216" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":22,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:27:29.544: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 28 lines ...
• [SLOW TEST:18.657 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_downwardapi.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:18.646 seconds]
[k8s.io] Variable Expansion
test/e2e/framework/framework.go:629
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 43 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:359
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":122,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:27:01.962: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 49 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:437
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":6,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] version v1
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 354 lines ...
test/e2e/network/framework.go:23
  version v1
  test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":6,"skipped":41,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 78 lines ...
• [SLOW TEST:11.084 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]
  test/e2e/apimachinery/resource_quota.go:507
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]","total":-1,"completed":4,"skipped":11,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 58 lines ...
• [SLOW TEST:12.155 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/configmap_volume.go:56
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":11,"skipped":126,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:28:01.743: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:150
Nov 22 03:28:01.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 61 lines ...
test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":5,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 68 lines ...
• [SLOW TEST:86.193 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently
  test/e2e/apps/cronjob.go:60
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently","total":-1,"completed":4,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 63 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:359
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl alpha client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 8 lines ...
  test/e2e/kubectl/kubectl.go:235
Nov 22 03:28:10.671: INFO: Could not find batch/v2alpha1, Resource=cronjobs resource, skipping test: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"Status", APIVersion:"v1"}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"the server could not find the requested resource", Reason:"NotFound", Details:(*v1.StatusDetails)(0xc0020d3200), Code:404}}
[AfterEach] Kubectl run CronJob
  test/e2e/kubectl/kubectl.go:231
Nov 22 03:28:10.672: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-6507'
Nov 22 03:28:10.844: INFO: rc: 1
Nov 22 03:28:10.844: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-6507:\nCommand stdout:\n\nstderr:\nError from server (NotFound): cronjobs.batch \"e2e-test-echo-cronjob-alpha\" not found\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-6507:
    Command stdout:
    
    stderr:
    Error from server (NotFound): cronjobs.batch "e2e-test-echo-cronjob-alpha" not found
    
    error:
    exit status 1
occurred
[AfterEach] [sig-cli] Kubectl alpha client
  test/e2e/framework/framework.go:150
Nov 22 03:28:10.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6507" for this suite.
... skipping 182 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":65,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:22.198 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":7,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 126 lines ...
  test/e2e/storage/csi_volumes.go:55
    [Testpattern: inline ephemeral CSI volume] ephemeral
    test/e2e/storage/testsuites/base.go:100
      should support two pods which share the same volume
      test/e2e/storage/testsuites/ephemeral.go:139
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support two pods which share the same volume","total":-1,"completed":3,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 67 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":39,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 73 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:504
    should return command exit codes
    test/e2e/kubectl/kubectl.go:624
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes","total":-1,"completed":4,"skipped":26,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:15.985 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:27:46.098: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 43 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":11,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:28:22.477: INFO: Driver local doesn't support ext4 -- skipping
... skipping 42 lines ...
• [SLOW TEST:87.660 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":51,"failed":0}
[BeforeEach] [sig-network] [sig-windows] Networking
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:28:22.925: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 151 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:242
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:243
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":26,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 18 lines ...
  Only supported for providers [gce gke aws] (not skeleton)

  test/e2e/scheduling/ubernetes_lite.go:43
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":5,"skipped":39,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:28:17.482: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:7.129 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":6,"skipped":39,"failed":0}

SSSSSSSSS
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":53,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:28:00.263: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
• [SLOW TEST:25.590 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":8,"skipped":53,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 13 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:28:26.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-799" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":9,"skipped":55,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 68 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:225
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":58,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:28:29.119: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 119 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:100
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":7,"skipped":46,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 18 lines ...
test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  test/e2e/kubectl/portforward.go:464
    should support forwarding over websockets
    test/e2e/kubectl/portforward.go:480
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":8,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 63 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":36,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:28:31.694: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 149 lines ...
test/e2e/kubectl/framework.go:23
  Update Demo
  test/e2e/kubectl/kubectl.go:328
    should create and stop a replication controller  [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":8,"skipped":52,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 125 lines ...
• [SLOW TEST:7.592 seconds]
[sig-auth] ServiceAccounts
test/e2e/auth/framework.go:23
  should ensure a single API token exists
  test/e2e/auth/service_accounts.go:47
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":6,"skipped":44,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:28:39.326: INFO: Driver local doesn't support ext3 -- skipping
... skipping 31 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:28:39.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8350" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":7,"skipped":71,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Ephemeralstorage
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 18 lines ...
test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:28:43.138: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename zone-support
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 74 lines ...
• [SLOW TEST:10.120 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":87,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:28:24.625: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:634
STEP: creating the pod
Nov 22 03:28:24.660: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:150
Nov 22 03:28:44.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1471" for this suite.


• [SLOW TEST:19.835 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:629
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":7,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] crictl
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 149 lines ...
• [SLOW TEST:14.144 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_configmap.go:57
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":50,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:14.115 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:629
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  test/e2e/node/security_context.go:68
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":9,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 12 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:28:45.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8689" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":-1,"completed":10,"skipped":41,"failed":0}

SSSSS
------------------------------
[BeforeEach] version v1
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 106 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:28:46.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8206" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":-1,"completed":11,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:12.071 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  evictions: too few pods, replicaSet, percentage => should not allow an eviction
  test/e2e/apps/disruption.go:149
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage =\u003e should not allow an eviction","total":-1,"completed":8,"skipped":74,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 33 lines ...
  test/e2e/kubectl/portforward.go:464
    that expects a client request
    test/e2e/kubectl/portforward.go:465
      should support a client that connects, sends NO DATA, and disconnects
      test/e2e/kubectl/portforward.go:466
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":5,"skipped":39,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 40 lines ...
• [SLOW TEST:30.188 seconds]
[sig-storage] PVC Protection
test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  test/e2e/storage/pvc_protection.go:138
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":5,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:28:53.934: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:150
Nov 22 03:28:53.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 60 lines ...
• [SLOW TEST:36.176 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:629
  should be restarted with a local redirect http liveness probe
  test/e2e/common/container_probe.go:231
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":4,"skipped":27,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:28:55.103: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 85 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:437
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":5,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:28:55.406: INFO: Only supported for providers [gce gke] (not skeleton)
... skipping 165 lines ...
  test/e2e/storage/csi_volumes.go:55
    [Testpattern: inline ephemeral CSI volume] ephemeral
    test/e2e/storage/testsuites/base.go:100
      should create read/write inline ephemeral volume
      test/e2e/storage/testsuites/ephemeral.go:127
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":12,"skipped":138,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Generated clientset
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:6.340 seconds]
[sig-api-machinery] Generated clientset
test/e2e/apimachinery/framework.go:23
  should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
  test/e2e/apimachinery/generated_clientset.go:103
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":9,"skipped":79,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:28:57.847: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 88 lines ...
• [SLOW TEST:60.115 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:629
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 64 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:374
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":8,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 121 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:359
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":12,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:14.732 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:629
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":5,"skipped":44,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 67 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:203
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:226
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":6,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:10.687: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:150
Nov 22 03:29:10.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 36 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      test/e2e/storage/testsuites/base.go:697
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":4,"skipped":26,"failed":0}
[BeforeEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:28:06.701: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:64.347 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:629
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":26,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:11.066: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 35 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:29:11.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-5930" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":6,"skipped":38,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:11.137: INFO: Driver cinder doesn't support ext4 -- skipping
... skipping 34 lines ...
Nov 22 03:29:04.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990142, loc:(*time.Location)(0x7ce3280)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990142, loc:(*time.Location)(0x7ce3280)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990142, loc:(*time.Location)(0x7ce3280)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990142, loc:(*time.Location)(0x7ce3280)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 22 03:29:06.740: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990142, loc:(*time.Location)(0x7ce3280)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990142, loc:(*time.Location)(0x7ce3280)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990142, loc:(*time.Location)(0x7ce3280)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990142, loc:(*time.Location)(0x7ce3280)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 22 03:29:08.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990142, loc:(*time.Location)(0x7ce3280)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990142, loc:(*time.Location)(0x7ce3280)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990142, loc:(*time.Location)(0x7ce3280)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990142, loc:(*time.Location)(0x7ce3280)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Nov 22 03:29:11.753: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:634
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:150
Nov 22 03:29:11.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1603" for this suite.
... skipping 2 lines ...
  test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:9.818 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":6,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 52 lines ...
• [SLOW TEST:18.117 seconds]
[sig-storage] EmptyDir wrapper volumes
test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":6,"skipped":91,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:16.390 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":148,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 91 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:29:18.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7495" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":14,"skipped":150,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 24 lines ...
STEP: Creating a kubernetes client
Nov 22 03:25:38.192: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  test/e2e/apps/cronjob.go:55
[It] should delete successful/failed finished jobs with limit of one job
  test/e2e/apps/cronjob.go:233
STEP: Creating a AllowConcurrent cronjob with custom successful-jobs-history-limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
STEP: Removing cronjob
STEP: Creating a AllowConcurrent cronjob with custom failed-jobs-history-limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
... skipping 2 lines ...
STEP: Destroying namespace "cronjob-3782" for this suite.


• [SLOW TEST:220.541 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should delete successful/failed finished jobs with limit of one job
  test/e2e/apps/cronjob.go:233
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful/failed finished jobs with limit of one job","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 58 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:100
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":46,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:12.133 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:629
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  test/e2e/node/security_context.go:76
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":13,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:12.204 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":50,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 86 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:225
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":39,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:23.994: INFO: Driver local doesn't support ext4 -- skipping
... skipping 74 lines ...
STEP: Creating a kubernetes client
Nov 22 03:29:24.108: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename node-problem-detector
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
  test/e2e/node/node_problem_detector.go:49
Nov 22 03:29:24.159: INFO: No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
  test/e2e/framework/framework.go:150
Nov 22 03:29:24.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-problem-detector-8625" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.068 seconds]
[k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
test/e2e/framework/framework.go:629
  should run without error [BeforeEach]
  test/e2e/node/node_problem_detector.go:57

  No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  test/e2e/node/node_problem_detector.go:50
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:24.178: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 85 lines ...
Nov 22 03:29:10.486: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:10.607: INFO: Exec stderr: ""
Nov 22 03:29:20.620: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-7347"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-7347"/host; echo host > "/var/lib/kubelet/mount-propagation-7347"/host/file] Namespace:mount-propagation-7347 PodName:hostexec-kind-worker-5z7tn ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 22 03:29:20.620: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:20.778: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7347 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:20.778: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:20.964: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Nov 22 03:29:20.969: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7347 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:20.969: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:21.130: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Nov 22 03:29:21.133: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7347 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:21.133: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:21.253: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Nov 22 03:29:21.258: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7347 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:21.258: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:21.380: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Nov 22 03:29:21.383: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7347 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:21.383: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:21.520: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Nov 22 03:29:21.522: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7347 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:21.522: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:21.646: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Nov 22 03:29:21.650: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7347 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:21.650: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:21.792: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Nov 22 03:29:21.796: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7347 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:21.796: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:21.951: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Nov 22 03:29:21.954: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7347 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:21.954: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:22.148: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Nov 22 03:29:22.152: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7347 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:22.152: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:22.311: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Nov 22 03:29:22.321: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7347 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:22.321: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:22.517: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Nov 22 03:29:22.521: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7347 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:22.521: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:22.698: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Nov 22 03:29:22.704: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7347 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:22.704: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:22.931: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Nov 22 03:29:22.935: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7347 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:22.935: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:23.104: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Nov 22 03:29:23.108: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7347 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:23.109: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:23.268: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Nov 22 03:29:23.272: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7347 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:23.272: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:23.468: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Nov 22 03:29:23.471: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7347 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:23.471: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:23.638: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Nov 22 03:29:23.641: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7347 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:23.641: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:23.813: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Nov 22 03:29:23.820: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7347 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:23.820: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:23.979: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Nov 22 03:29:23.987: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7347 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:29:23.988: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:24.186: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Nov 22 03:29:24.186: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-7347"/master/file` = master] Namespace:mount-propagation-7347 PodName:hostexec-kind-worker-5z7tn ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 22 03:29:24.186: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:24.350: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-7347"/slave/file] Namespace:mount-propagation-7347 PodName:hostexec-kind-worker-5z7tn ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 22 03:29:24.350: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:29:24.509: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-7347"/host] Namespace:mount-propagation-7347 PodName:hostexec-kind-worker-5z7tn ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Nov 22 03:29:24.509: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 109 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:225
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":19,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Volume limits
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 53 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:63
[It] should support sysctls
  test/e2e/common/sysctl.go:67
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:150
... skipping 4 lines ...
• [SLOW TEST:12.112 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
test/e2e/framework/framework.go:629
  should support sysctls
  test/e2e/common/sysctl.go:67
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":7,"skipped":94,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:25.680: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 15 lines ...
      Driver hostPath doesn't support PreprovisionedPV -- skipping

      test/e2e/storage/testsuites/base.go:154
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":9,"skipped":70,"failed":0}
[BeforeEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:29:12.177: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
test/e2e/framework/framework.go:629
  when scheduling a busybox command in a pod
  test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":70,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:26.338: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:150
Nov 22 03:29:26.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      test/e2e/storage/testsuites/base.go:697
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":9,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:29:15.353: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 38 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:359
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":53,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:29.572: INFO: Driver local doesn't support ntfs -- skipping
... skipping 131 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:100
      should store data
      test/e2e/storage/testsuites/volumes.go:150
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":66,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 77 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:374
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":10,"skipped":94,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 49 lines ...
• [SLOW TEST:12.058 seconds]
[k8s.io] Docker Containers
test/e2e/framework/framework.go:629
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":51,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:33.576: INFO: Driver local doesn't support ext4 -- skipping
... skipping 42 lines ...
• [SLOW TEST:16.161 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  evictions: enough pods, absolute => should allow an eviction
  test/e2e/apps/disruption.go:149
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":15,"skipped":152,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:34.524: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 118 lines ...
test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  test/e2e/storage/csi_mock_volume.go:530
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    test/e2e/storage/csi_mock_volume.go:545
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":7,"skipped":51,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 206 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:374
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":8,"skipped":65,"failed":0}
[BeforeEach] [sig-autoscaling] DNS horizontal autoscaling
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:29:35.461: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename dns-autoscaling
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 83 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:629
    should not deadlock when a pod's predecessor fails
    test/e2e/apps/statefulset.go:224
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":7,"skipped":23,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 91 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:189
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":48,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:39.314: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 49 lines ...
test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:43
    volume on tmpfs should have the correct mode using FSGroup
    test/e2e/common/empty_dir.go:70
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":15,"skipped":59,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:39.756: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 189 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:29:39.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2387" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":8,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 14 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:29:39.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2449" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":16,"skipped":94,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:40.023: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 79 lines ...
• [SLOW TEST:16.191 seconds]
[sig-node] Downward API
test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":99,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:9.606 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":9,"skipped":66,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:45.175: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 46 lines ...
• [SLOW TEST:12.195 seconds]
[k8s.io] [sig-node] Events
test/e2e/framework/framework.go:629
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":16,"skipped":161,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:16.198 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":99,"failed":0}

SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes:vsphere
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 54 lines ...
test/e2e/common/networking.go:26
  Granular Checks: Pods
  test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":21,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 32 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  test/e2e/kubectl/kubectl.go:1033
    should create/apply a CR with unknown fields for CRD with no validation schema
    test/e2e/kubectl/kubectl.go:1034
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":8,"skipped":53,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 7 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:29:51.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4389" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":7,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 36 lines ...
  test/e2e/kubectl/portforward.go:464
    that expects NO client request
    test/e2e/kubectl/portforward.go:474
      should support a client that connects, sends DATA, and disconnects
      test/e2e/kubectl/portforward.go:475
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":2,"skipped":13,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:52.545: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 69 lines ...
• [SLOW TEST:12.191 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  should update PodDisruptionBudget status
  test/e2e/apps/disruption.go:61
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update PodDisruptionBudget status","total":-1,"completed":9,"skipped":100,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:54.073: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 53 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:437
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":7,"skipped":35,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":9,"skipped":74,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:29:25.371: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
• [SLOW TEST:30.100 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":10,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 37 lines ...
  test/e2e/kubectl/portforward.go:442
    that expects a client request
    test/e2e/kubectl/portforward.go:443
      should support a client that connects, sends DATA, and disconnects
      test/e2e/kubectl/portforward.go:447
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":8,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:23.117 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":8,"skipped":89,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:58.573: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 154 lines ...
test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  test/e2e/storage/csi_mock_volume.go:240
    should preserve attachment policy when no CSIDriver present
    test/e2e/storage/csi_mock_volume.go:262
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":7,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:29:58.847: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:29:58.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 45 lines ...
• [SLOW TEST:12.130 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:629
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]
  test/e2e/node/security_context.go:117
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":17,"skipped":165,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 19 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:29:59.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8024" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":75,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 39 lines ...
• [SLOW TEST:6.138 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":8,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:10.164 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:629
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
  test/e2e/node/security_context.go:88
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":8,"skipped":35,"failed":0}

SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 62 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:100
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:02.310: INFO: Only supported for providers [gce gke] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:30:02.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 61 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:374
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":56,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:02.908: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 88 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:189
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":51,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:03.650: INFO: Only supported for providers [vsphere] (not skeleton)
... skipping 40 lines ...
Nov 22 03:29:43.727: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:43.755: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:43.805: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:43.815: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:43.852: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:43.859: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:43.881: INFO: Lookups using dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5100.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5100.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local jessie_udp@dns-test-service-2.dns-5100.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5100.svc.cluster.local]

Nov 22 03:29:48.886: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:48.889: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:48.892: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:48.895: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:48.907: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:48.910: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:48.914: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:48.918: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:48.924: INFO: Lookups using dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5100.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5100.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local jessie_udp@dns-test-service-2.dns-5100.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5100.svc.cluster.local]

Nov 22 03:29:53.886: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:53.890: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:53.895: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:53.900: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:53.911: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:53.914: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:53.917: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:53.921: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:53.929: INFO: Lookups using dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5100.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5100.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local jessie_udp@dns-test-service-2.dns-5100.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5100.svc.cluster.local]

Nov 22 03:29:58.890: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:58.893: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:58.899: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:58.904: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:58.920: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:58.925: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:58.928: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:58.932: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5100.svc.cluster.local from pod dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5: the server could not find the requested resource (get pods dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5)
Nov 22 03:29:58.947: INFO: Lookups using dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5100.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5100.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5100.svc.cluster.local jessie_udp@dns-test-service-2.dns-5100.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5100.svc.cluster.local]

Nov 22 03:30:04.043: INFO: DNS probes using dns-5100/dns-test-fc1952a7-3edd-44b0-8c31-ef87b545cdf5 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:34.732 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":11,"skipped":72,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 60 lines ...
test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":8,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:05.202: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:30:05.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
      Driver local doesn't support InlineVolume -- skipping

      test/e2e/storage/testsuites/base.go:154
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp","total":-1,"completed":10,"skipped":57,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:29:16.934: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 92 lines ...
test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  test/e2e/storage/csi_mock_volume.go:419
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    test/e2e/storage/csi_mock_volume.go:448
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":11,"skipped":57,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 28 lines ...
test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:43
    files with FSGroup ownership should support (root,0644,tmpfs)
    test/e2e/common/empty_dir.go:62
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":9,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:08.569: INFO: Only supported for providers [aws] (not skeleton)
... skipping 96 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  test/e2e/framework/framework.go:150
Nov 22 03:30:08.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":10,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:08.624: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:30:08.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 41 lines ...
• [SLOW TEST:13.355 seconds]
[k8s.io] PrivilegedPod [NodeConformance]
test/e2e/framework/framework.go:629
  should enable privileged commands [LinuxOnly]
  test/e2e/common/privileged.go:49
------------------------------
{"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":9,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:09.158: INFO: Only supported for providers [azure] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:30:09.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 63 lines ...
• [SLOW TEST:11.208 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":9,"skipped":101,"failed":0}

SSSS
------------------------------
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:34
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:63
[It] should support unsafe sysctls which are actually whitelisted
  test/e2e/common/sysctl.go:110
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:150
... skipping 39 lines ...
• [SLOW TEST:12.269 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":82,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:12.004: INFO: Only supported for providers [vsphere] (not skeleton)
... skipping 59 lines ...
• [SLOW TEST:5.612 seconds]
[sig-api-machinery] Watchers
test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":12,"skipped":65,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 65 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl logs
  test/e2e/kubectl/kubectl.go:1441
    should be able to retrieve and filter logs  [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":10,"skipped":81,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:15.053: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 35 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:30:15.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5053" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":11,"skipped":86,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 42 lines ...
      Only supported for providers [openstack] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1019
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":8,"skipped":56,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:29:33.546: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 63 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:203
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":9,"skipped":56,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Volume Placement
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 145 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:374
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":7,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 80 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:100
      should store data
      test/e2e/storage/testsuites/volumes.go:150
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":10,"skipped":93,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:16.713: INFO: Driver gluster doesn't support ntfs -- skipping
... skipping 131 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl label
  test/e2e/kubectl/kubectl.go:1360
    should update the label on a resource  [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":10,"skipped":105,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:19.568: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 87 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:189
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:19.654: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:30:19.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 58 lines ...
      test/e2e/storage/testsuites/volumes.go:150

      Only supported for providers [vsphere] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1322
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":9,"skipped":54,"failed":0}
[BeforeEach] [sig-instrumentation] MetricsGrabber
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:30:19.436: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename metrics-grabber
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 41 lines ...
• [SLOW TEST:10.564 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":105,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 18 lines ...
Nov 22 03:30:20.872: INFO: stdout: "NAMESPACE      NAME                CONTAINERS   IMAGES          POD LABELS\nkubectl-6642   pt1namelwl4k8hvr9   container9   fedora:latest   pt=01\n"
Nov 22 03:30:20.920: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config get replicationcontrollers --all-namespaces'
Nov 22 03:30:21.075: INFO: stderr: ""
Nov 22 03:30:21.075: INFO: stdout: "NAMESPACE                     NAME                                                     DESIRED   CURRENT   READY   AGE\nkubectl-6642                  rc1lwl4k8hvr9                                            1         1         0       1s\nreplication-controller-1172   my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f   1         1         1       20s\n"
Nov 22 03:30:21.130: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config get events --all-namespaces'
Nov 22 03:30:21.410: INFO: stderr: ""
Nov 22 03:30:21.410: INFO: stdout: "NAMESPACE                           LAST SEEN   TYPE      REASON                     OBJECT                                                                         MESSAGE\ncontainer-probe-146                 3m19s       Normal    Scheduled                  pod/test-webserver-b56520ca-dce2-4dea-b030-d7c1dc01dc59                        Successfully assigned container-probe-146/test-webserver-b56520ca-dce2-4dea-b030-d7c1dc01dc59 to kind-worker2\ncontainer-probe-146                 3m18s       Normal    Pulled                     pod/test-webserver-b56520ca-dce2-4dea-b030-d7c1dc01dc59                        Container image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\" already present on machine\ncontainer-probe-146                 3m18s       Normal    Created                    pod/test-webserver-b56520ca-dce2-4dea-b030-d7c1dc01dc59                        Created container test-webserver\ncontainer-probe-146                 3m17s       Normal    Started                    pod/test-webserver-b56520ca-dce2-4dea-b030-d7c1dc01dc59                        Started container test-webserver\ncontainer-probe-153                 3m38s       Normal    Scheduled                  pod/liveness-a53bb86f-77fe-4914-9dd3-67b2d9bdd5c5                              Successfully assigned container-probe-153/liveness-a53bb86f-77fe-4914-9dd3-67b2d9bdd5c5 to kind-worker\ncontainer-probe-153                 3m38s       Normal    Pulled                     pod/liveness-a53bb86f-77fe-4914-9dd3-67b2d9bdd5c5                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ncontainer-probe-153                 3m38s       Normal    Created                    pod/liveness-a53bb86f-77fe-4914-9dd3-67b2d9bdd5c5                              Created container liveness\ncontainer-probe-153                 3m37s       Normal    Started                    pod/liveness-a53bb86f-77fe-4914-9dd3-67b2d9bdd5c5   
                           Started container liveness\ncsi-mock-volumes-9179               39s         Normal    Pulling                    pod/csi-mockplugin-0                                                           Pulling image \"quay.io/k8scsi/csi-provisioner:v1.4.0-rc1\"\ncsi-mock-volumes-9179               39s         Normal    Pulled                     pod/csi-mockplugin-0                                                           Successfully pulled image \"quay.io/k8scsi/csi-provisioner:v1.4.0-rc1\"\ncsi-mock-volumes-9179               39s         Normal    Created                    pod/csi-mockplugin-0                                                           Created container csi-provisioner\ncsi-mock-volumes-9179               38s         Normal    Started                    pod/csi-mockplugin-0                                                           Started container csi-provisioner\ncsi-mock-volumes-9179               38s         Normal    Pulling                    pod/csi-mockplugin-0                                                           Pulling image \"quay.io/k8scsi/csi-node-driver-registrar:v1.1.0\"\ncsi-mock-volumes-9179               38s         Normal    Pulled                     pod/csi-mockplugin-0                                                           Successfully pulled image \"quay.io/k8scsi/csi-node-driver-registrar:v1.1.0\"\ncsi-mock-volumes-9179               37s         Normal    Created                    pod/csi-mockplugin-0                                                           Created container driver-registrar\ncsi-mock-volumes-9179               37s         Normal    Started                    pod/csi-mockplugin-0                                                           Started container driver-registrar\ncsi-mock-volumes-9179               37s         Normal    Pulled                     pod/csi-mockplugin-0                                                           Container image 
\"quay.io/k8scsi/mock-driver:v2.1.0\" already present on machine\ncsi-mock-volumes-9179               37s         Normal    Created                    pod/csi-mockplugin-0                                                           Created container mock\ncsi-mock-volumes-9179               37s         Normal    Started                    pod/csi-mockplugin-0                                                           Started container mock\ncsi-mock-volumes-9179               39s         Normal    Pulling                    pod/csi-mockplugin-attacher-0                                                  Pulling image \"quay.io/k8scsi/csi-attacher:v1.1.0\"\ncsi-mock-volumes-9179               38s         Normal    Pulled                     pod/csi-mockplugin-attacher-0                                                  Successfully pulled image \"quay.io/k8scsi/csi-attacher:v1.1.0\"\ncsi-mock-volumes-9179               38s         Normal    Created                    pod/csi-mockplugin-attacher-0                                                  Created container csi-attacher\ncsi-mock-volumes-9179               38s         Normal    Started                    pod/csi-mockplugin-attacher-0                                                  Started container csi-attacher\ncsi-mock-volumes-9179               40s         Normal    SuccessfulCreate           statefulset/csi-mockplugin-attacher                                            create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-9179               40s         Normal    SuccessfulCreate           statefulset/csi-mockplugin                                                     create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-9179               38s         Normal    ExternalProvisioning       persistentvolumeclaim/pvc-hsvjq                                                waiting for a volume to be created, either by external provisioner 
\"csi-mock-csi-mock-volumes-9179\" or manually created by system administrator\ncsi-mock-volumes-9179               36s         Normal    Provisioning               persistentvolumeclaim/pvc-hsvjq                                                External provisioner is provisioning volume for claim \"csi-mock-volumes-9179/pvc-hsvjq\"\ncsi-mock-volumes-9179               12s         Warning   ExternalExpanding          persistentvolumeclaim/pvc-hsvjq                                                Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\ncsi-mock-volumes-9179               34s         Normal    SuccessfulAttachVolume     pod/pvc-volume-tester-9s8ks                                                    AttachVolume.Attach succeeded for volume \"pvc-35d7b36d-30c9-4d18-ada1-7015f17fb77d\"\ncsi-mock-volumes-9179               27s         Normal    Pulled                     pod/pvc-volume-tester-9s8ks                                                    Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-9179               27s         Normal    Created                    pod/pvc-volume-tester-9s8ks                                                    Created container volume-tester\ncsi-mock-volumes-9179               27s         Normal    Started                    pod/pvc-volume-tester-9s8ks                                                    Started container volume-tester\ndefault                             6m25s       Normal    Starting                   node/kind-control-plane                                                        Starting kubelet.\ndefault                             6m25s       Warning   CheckLimitsForResolvConf   node/kind-control-plane                                                        Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\ndefault                             6m25s       
Normal    NodeHasSufficientMemory    node/kind-control-plane                                                        Node kind-control-plane status is now: NodeHasSufficientMemory\ndefault                             6m25s       Normal    NodeHasNoDiskPressure      node/kind-control-plane                                                        Node kind-control-plane status is now: NodeHasNoDiskPressure\ndefault                             6m25s       Normal    NodeHasSufficientPID       node/kind-control-plane                                                        Node kind-control-plane status is now: NodeHasSufficientPID\ndefault                             6m25s       Normal    NodeAllocatableEnforced    node/kind-control-plane                                                        Updated Node Allocatable limit across pods\ndefault                             6m9s        Normal    RegisteredNode             node/kind-control-plane                                                        Node kind-control-plane event: Registered Node kind-control-plane in Controller\ndefault                             6m2s        Normal    Starting                   node/kind-control-plane                                                        Starting kube-proxy.\ndefault                             5m25s       Normal    NodeReady                  node/kind-control-plane                                                        Node kind-control-plane status is now: NodeReady\ndefault                             5m51s       Normal    NodeHasSufficientMemory    node/kind-worker                                                               Node kind-worker status is now: NodeHasSufficientMemory\ndefault                             5m49s       Normal    RegisteredNode             node/kind-worker                                                               Node kind-worker event: Registered Node kind-worker in Controller\ndefault                             5m43s       Normal    
Starting                   node/kind-worker                                                               Starting kube-proxy.\ndefault                             5m50s       Normal    NodeHasSufficientPID       node/kind-worker2                                                              Node kind-worker2 status is now: NodeHasSufficientPID\ndefault                             5m49s       Normal    RegisteredNode             node/kind-worker2                                                              Node kind-worker2 event: Registered Node kind-worker2 in Controller\ndefault                             5m42s       Normal    Starting                   node/kind-worker2                                                              Starting kube-proxy.\ndeployment-649                      1s          Normal    Scheduled                  pod/test-rolling-update-controller-2cl67                                       Successfully assigned deployment-649/test-rolling-update-controller-2cl67 to kind-worker\ndeployment-649                      1s          Normal    SuccessfulCreate           replicaset/test-rolling-update-controller                                      Created pod: test-rolling-update-controller-2cl67\ne2e-kubelet-etc-hosts-2912          2s          Normal    Scheduled                  pod/test-host-network-pod                                                      Successfully assigned e2e-kubelet-etc-hosts-2912/test-host-network-pod to kind-worker2\ne2e-kubelet-etc-hosts-2912          13s         Normal    Scheduled                  pod/test-pod                                                                   Successfully assigned e2e-kubelet-etc-hosts-2912/test-pod to kind-worker\ne2e-kubelet-etc-hosts-2912          10s         Normal    Pulled                     pod/test-pod                                                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on 
machine\ne2e-kubelet-etc-hosts-2912          10s         Normal    Created                    pod/test-pod                                                                   Created container busybox-1\ne2e-kubelet-etc-hosts-2912          10s         Normal    Started                    pod/test-pod                                                                   Started container busybox-1\ne2e-kubelet-etc-hosts-2912          10s         Normal    Pulled                     pod/test-pod                                                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ne2e-kubelet-etc-hosts-2912          10s         Normal    Created                    pod/test-pod                                                                   Created container busybox-2\ne2e-kubelet-etc-hosts-2912          9s          Normal    Started                    pod/test-pod                                                                   Started container busybox-2\ne2e-kubelet-etc-hosts-2912          9s          Normal    Pulled                     pod/test-pod                                                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ne2e-kubelet-etc-hosts-2912          9s          Normal    Created                    pod/test-pod                                                                   Created container busybox-3\ne2e-kubelet-etc-hosts-2912          9s          Normal    Started                    pod/test-pod                                                                   Started container busybox-3\nemptydir-3786                       5s          Normal    Scheduled                  pod/pod-9ac383fe-bf63-44ae-a8cb-bba4c6a73c0e                                   Successfully assigned emptydir-3786/pod-9ac383fe-bf63-44ae-a8cb-bba4c6a73c0e to kind-worker\nemptydir-3786                       4s          Normal    
Pulling                    pod/pod-9ac383fe-bf63-44ae-a8cb-bba4c6a73c0e                                   Pulling image \"gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0\"\nemptydir-3786                       3s          Normal    Pulled                     pod/pod-9ac383fe-bf63-44ae-a8cb-bba4c6a73c0e                                   Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0\"\nemptydir-3786                       3s          Normal    Created                    pod/pod-9ac383fe-bf63-44ae-a8cb-bba4c6a73c0e                                   Created container test-container\nemptydir-3786                       3s          Normal    Started                    pod/pod-9ac383fe-bf63-44ae-a8cb-bba4c6a73c0e                                   Started container test-container\nephemeral-6689                      16s         Normal    Pulling                    pod/csi-hostpath-attacher-0                                                    Pulling image \"quay.io/k8scsi/csi-attacher:v2.0.0\"\nephemeral-6689                      13s         Normal    Pulled                     pod/csi-hostpath-attacher-0                                                    Successfully pulled image \"quay.io/k8scsi/csi-attacher:v2.0.0\"\nephemeral-6689                      13s         Normal    Created                    pod/csi-hostpath-attacher-0                                                    Created container csi-attacher\nephemeral-6689                      12s         Normal    Started                    pod/csi-hostpath-attacher-0                                                    Started container csi-attacher\nephemeral-6689                      17s         Normal    SuccessfulCreate           statefulset/csi-hostpath-attacher                                              create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\nephemeral-6689                      15s         Normal    Pulling                    
pod/csi-hostpath-provisioner-0                                                 Pulling image \"quay.io/k8scsi/csi-provisioner:v1.5.0-rc1\"\nephemeral-6689                      17s         Normal    SuccessfulCreate           statefulset/csi-hostpath-provisioner                                           create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\nephemeral-6689                      15s         Normal    Pulling                    pod/csi-hostpath-resizer-0                                                     Pulling image \"quay.io/k8scsi/csi-resizer:v0.3.0\"\nephemeral-6689                      17s         Normal    SuccessfulCreate           statefulset/csi-hostpath-resizer                                               create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nephemeral-6689                      16s         Normal    Pulling                    pod/csi-hostpathplugin-0                                                       Pulling image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\nephemeral-6689                      11s         Normal    Pulled                     pod/csi-hostpathplugin-0                                                       Successfully pulled image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\nephemeral-6689                      11s         Normal    Created                    pod/csi-hostpathplugin-0                                                       Created container node-driver-registrar\nephemeral-6689                      11s         Normal    Started                    pod/csi-hostpathplugin-0                                                       Started container node-driver-registrar\nephemeral-6689                      11s         Normal    Pulling                    pod/csi-hostpathplugin-0                                                       Pulling image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\nephemeral-6689                     
 17s         Normal    SuccessfulCreate           statefulset/csi-hostpathplugin                                                 create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nephemeral-6689                      15s         Normal    Pulling                    pod/csi-snapshotter-0                                                          Pulling image \"quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2\"\nephemeral-6689                      17s         Normal    SuccessfulCreate           statefulset/csi-snapshotter                                                    create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nephemeral-6689                      13s         Warning   FailedMount                pod/inline-volume-tester-2s74d                                                 MountVolume.SetUp failed for volume \"my-volume-0\" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-6689 not found in the list of registered CSI drivers\nkube-system                         6m9s        Warning   FailedScheduling           pod/coredns-6955765f44-mxkvk                                                   0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\nkube-system                         5m51s       Warning   FailedScheduling           pod/coredns-6955765f44-mxkvk                                                   0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.\nkube-system                         5m25s       Warning   FailedScheduling           pod/coredns-6955765f44-mxkvk                                                   0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.\nkube-system                         5m20s       Normal    Scheduled                  pod/coredns-6955765f44-mxkvk                                                   Successfully assigned kube-system/coredns-6955765f44-mxkvk to 
kind-control-plane\nkube-system                         5m14s       Normal    Pulled                     pod/coredns-6955765f44-mxkvk                                                   Container image \"k8s.gcr.io/coredns:1.6.5\" already present on machine\nkube-system                         5m13s       Normal    Created                    pod/coredns-6955765f44-mxkvk                                                   Created container coredns\nkube-system                         5m13s       Normal    Started                    pod/coredns-6955765f44-mxkvk                                                   Started container coredns\nkube-system                         6m9s        Warning   FailedScheduling           pod/coredns-6955765f44-v49tc                                                   0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\nkube-system                         5m51s       Warning   FailedScheduling           pod/coredns-6955765f44-v49tc                                                   0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.\nkube-system                         5m25s       Warning   FailedScheduling           pod/coredns-6955765f44-v49tc                                                   0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.\nkube-system                         5m20s       Normal    Scheduled                  pod/coredns-6955765f44-v49tc                                                   Successfully assigned kube-system/coredns-6955765f44-v49tc to kind-control-plane\nkube-system                         5m14s       Normal    Pulled                     pod/coredns-6955765f44-v49tc                                                   Container image \"k8s.gcr.io/coredns:1.6.5\" already present on machine\nkube-system                         5m13s       Normal    Created                    pod/coredns-6955765f44-v49tc                                          
         Created container coredns\nkube-system                         5m13s       Normal    Started                    pod/coredns-6955765f44-v49tc                                                   Started container coredns\nkube-system                         6m9s        Normal    SuccessfulCreate           replicaset/coredns-6955765f44                                                  Created pod: coredns-6955765f44-v49tc\nkube-system                         6m9s        Normal    SuccessfulCreate           replicaset/coredns-6955765f44                                                  Created pod: coredns-6955765f44-mxkvk\nkube-system                         6m9s        Normal    ScalingReplicaSet          deployment/coredns                                                             Scaled up replica set coredns-6955765f44 to 2\nkube-system                         5m51s       Normal    Scheduled                  pod/kindnet-krxhw                                                              Successfully assigned kube-system/kindnet-krxhw to kind-worker\nkube-system                         5m50s       Normal    Pulling                    pod/kindnet-krxhw                                                              Pulling image \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\nkube-system                         5m46s       Normal    Pulled                     pod/kindnet-krxhw                                                              Successfully pulled image \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\nkube-system                         5m45s       Normal    Created                    pod/kindnet-krxhw                                                              Created container kindnet-cni\nkube-system                         5m45s       Normal    Started                    pod/kindnet-krxhw                                                              
Started container kindnet-cni
kube-system  6m9s  Normal  Scheduled  pod/kindnet-lnv5z  Successfully assigned kube-system/kindnet-lnv5z to kind-control-plane
kube-system  6m8s  Normal  Pulling  pod/kindnet-lnv5z  Pulling image "kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555"
kube-system  6m6s  Normal  Pulled  pod/kindnet-lnv5z  Successfully pulled image "kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555"
kube-system  6m5s  Normal  Created  pod/kindnet-lnv5z  Created container kindnet-cni
kube-system  6m5s  Normal  Started  pod/kindnet-lnv5z  Started container kindnet-cni
kube-system  5m50s  Normal  Scheduled  pod/kindnet-rmvhf  Successfully assigned kube-system/kindnet-rmvhf to kind-worker2
kube-system  5m49s  Normal  Pulling  pod/kindnet-rmvhf  Pulling image "kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555"
kube-system  5m45s  Normal  Pulled  pod/kindnet-rmvhf  Successfully pulled image "kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555"
kube-system  5m45s  Normal  Created  pod/kindnet-rmvhf  Created container kindnet-cni
kube-system  5m45s  Normal  Started  pod/kindnet-rmvhf  Started container kindnet-cni
kube-system  6m9s  Normal  SuccessfulCreate  daemonset/kindnet  Created pod: kindnet-lnv5z
kube-system  5m51s  Normal  SuccessfulCreate  daemonset/kindnet  Created pod: kindnet-krxhw
kube-system  5m50s  Normal  SuccessfulCreate  daemonset/kindnet  Created pod: kindnet-rmvhf
kube-system  6m25s  Normal  LeaderElection  endpoints/kube-controller-manager  kind-control-plane_cb68fe76-a826-4e72-81b7-ec45e90de04d became leader
kube-system  6m25s  Normal  LeaderElection  lease/kube-controller-manager  kind-control-plane_cb68fe76-a826-4e72-81b7-ec45e90de04d became leader
kube-system  5m51s  Normal  Scheduled  pod/kube-proxy-m22kv  Successfully assigned kube-system/kube-proxy-m22kv to kind-worker
kube-system  5m50s  Normal  Pulled  pod/kube-proxy-m22kv  Container image "k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730" already present on machine
kube-system  5m48s  Normal  Created  pod/kube-proxy-m22kv  Created container kube-proxy
kube-system  5m48s  Normal  Started  pod/kube-proxy-m22kv  Started container kube-proxy
kube-system  5m50s  Normal  Scheduled  pod/kube-proxy-v8fsf  Successfully assigned kube-system/kube-proxy-v8fsf to kind-worker2
kube-system  5m49s  Normal  Pulled  pod/kube-proxy-v8fsf  Container image "k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730" already present on machine
kube-system  5m47s  Normal  Created  pod/kube-proxy-v8fsf  Created container kube-proxy
kube-system  5m47s  Normal  Started  pod/kube-proxy-v8fsf  Started container kube-proxy
kube-system  6m9s  Normal  Scheduled  pod/kube-proxy-vjhtv  Successfully assigned kube-system/kube-proxy-vjhtv to kind-control-plane
kube-system  6m8s  Normal  Pulled  pod/kube-proxy-vjhtv  Container image "k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730" already present on machine
kube-system  6m7s  Normal  Created  pod/kube-proxy-vjhtv  Created container kube-proxy
kube-system  6m7s  Normal  Started  pod/kube-proxy-vjhtv  Started container kube-proxy
kube-system  6m9s  Normal  SuccessfulCreate  daemonset/kube-proxy  Created pod: kube-proxy-vjhtv
kube-system  5m51s  Normal  SuccessfulCreate  daemonset/kube-proxy  Created pod: kube-proxy-m22kv
kube-system  5m50s  Normal  SuccessfulCreate  daemonset/kube-proxy  Created pod: kube-proxy-v8fsf
kube-system  6m26s  Normal  LeaderElection  endpoints/kube-scheduler  kind-control-plane_124e1804-3c2f-4bce-88f9-cec3a614addb became leader
kube-system  6m26s  Normal  LeaderElection  lease/kube-scheduler  kind-control-plane_124e1804-3c2f-4bce-88f9-cec3a614addb became leader
kubectl-1391  16s  Normal  Scheduled  pod/httpd  Successfully assigned kubectl-1391/httpd to kind-worker
kubectl-1391  14s  Normal  Pulled  pod/httpd  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
kubectl-1391  14s  Normal  Created  pod/httpd  Created container httpd
kubectl-1391  14s  Normal  Started  pod/httpd  Started container httpd
kubectl-6642  <unknown>  some data here
kubectl-6642  1s  Normal  Scheduled  pod/rc1lwl4k8hvr9-bv9mh  Successfully assigned kubectl-6642/rc1lwl4k8hvr9-bv9mh to kind-worker2
kubectl-6642  1s  Normal  SuccessfulCreate  replicationcontroller/rc1lwl4k8hvr9  Created pod: rc1lwl4k8hvr9-bv9mh
kubectl-75  11s  Normal  Scheduled  pod/pause  Successfully assigned kubectl-75/pause to kind-worker
kubectl-75  9s  Normal  Pulled  pod/pause  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubectl-75  9s  Normal  Created  pod/pause  Created container pause
kubectl-75  9s  Normal  Started  pod/pause  Started container pause
nettest-3642  18s  Normal  Scheduled  pod/netserver-0  Successfully assigned nettest-3642/netserver-0 to kind-worker
nettest-3642  18s  Normal  Pulled  pod/netserver-0  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-3642  18s  Normal  Created  pod/netserver-0  Created container webserver
nettest-3642  17s  Normal  Started  pod/netserver-0  Started container webserver
nettest-3642  18s  Normal  Scheduled  pod/netserver-1  Successfully assigned nettest-3642/netserver-1 to kind-worker2
nettest-3642  18s  Normal  Pulled  pod/netserver-1  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-3642  18s  Normal  Created  pod/netserver-1  Created container webserver
nettest-3642  17s  Normal  Started  pod/netserver-1  Started container webserver
persistent-local-volumes-test-731  47s  Normal  Pulled  pod/hostexec-kind-worker-f7r7r  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-731  47s  Normal  Created  pod/hostexec-kind-worker-f7r7r  Created container agnhost
persistent-local-volumes-test-731  47s  Normal  Started  pod/hostexec-kind-worker-f7r7r  Started container agnhost
persistent-local-volumes-test-731  35s  Normal  Scheduled  pod/security-context-94395637-a16f-476c-819f-9bc01c5f2da1  Successfully assigned persistent-local-volumes-test-731/security-context-94395637-a16f-476c-819f-9bc01c5f2da1 to kind-worker
persistent-local-volumes-test-731  32s  Normal  SuccessfulMountVolume  pod/security-context-94395637-a16f-476c-819f-9bc01c5f2da1  MapVolume.MapPodDevice succeeded for volume "local-pvbwxjw" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvbwxjw"
persistent-local-volumes-test-731  32s  Normal  SuccessfulMountVolume  pod/security-context-94395637-a16f-476c-819f-9bc01c5f2da1  MapVolume.MapPodDevice succeeded for volume "local-pvbwxjw" volumeMapPath "/var/lib/kubelet/pods/e8e73b3e-26cf-4609-86d4-643fdd5769a2/volumeDevices/kubernetes.io~local-volume"
persistent-local-volumes-test-731  32s  Normal  Pulled  pod/security-context-94395637-a16f-476c-819f-9bc01c5f2da1  Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-731  32s  Normal  Created  pod/security-context-94395637-a16f-476c-819f-9bc01c5f2da1  Created container write-pod
persistent-local-volumes-test-731  32s  Normal  Started  pod/security-context-94395637-a16f-476c-819f-9bc01c5f2da1  Started container write-pod
persistent-local-volumes-test-737  9s  Normal  Pulled  pod/hostexec-kind-worker-bdcsn  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-737  9s  Normal  Created  pod/hostexec-kind-worker-bdcsn  Created container agnhost
persistent-local-volumes-test-737  8s  Normal  Started  pod/hostexec-kind-worker-bdcsn  Started container agnhost
prestop-2981  2s  Normal  Scheduled  pod/server  Successfully assigned prestop-2981/server to kind-worker
prestop-2981  1s  Normal  Pulled  pod/server  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
prestop-2981  1s  Normal  Created  pod/server  Created container server
provisioning-3445  50s  Normal  Pulled  pod/hostpath-symlink-prep-provisioning-3445  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-3445  49s  Normal  Created  pod/hostpath-symlink-prep-provisioning-3445  Created container init-volume-provisioning-3445
provisioning-3445  49s  Normal  Started  pod/hostpath-symlink-prep-provisioning-3445  Started container init-volume-provisioning-3445
provisioning-3445  19s  Normal  Pulled  pod/hostpath-symlink-prep-provisioning-3445  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-3445  19s  Normal  Created  pod/hostpath-symlink-prep-provisioning-3445  Created container init-volume-provisioning-3445
provisioning-3445  19s  Normal  Started  pod/hostpath-symlink-prep-provisioning-3445  Started container init-volume-provisioning-3445
provisioning-3445  37s  Normal  Pulled  pod/pod-subpath-test-hostpathsymlink-zxdp  Container image "docker.io/library/busybox:1.29" already present on machine
provisioning-3445  37s  Normal  Created  pod/pod-subpath-test-hostpathsymlink-zxdp  Created container init-volume-hostpathsymlink-zxdp
provisioning-3445  37s  Normal  Started  pod/pod-subpath-test-hostpathsymlink-zxdp  Started container init-volume-hostpathsymlink-zxdp
provisioning-3445  36s  Normal  Pulled  pod/pod-subpath-test-hostpathsymlink-zxdp  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-3445  35s  Normal  Created  pod/pod-subpath-test-hostpathsymlink-zxdp  Created container test-init-volume-hostpathsymlink-zxdp
provisioning-3445  35s  Normal  Started  pod/pod-subpath-test-hostpathsymlink-zxdp  Started container test-init-volume-hostpathsymlink-zxdp
provisioning-3445  35s  Normal  Pulled  pod/pod-subpath-test-hostpathsymlink-zxdp  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-3445  34s  Normal  Created  pod/pod-subpath-test-hostpathsymlink-zxdp  Created container test-container-subpath-hostpathsymlink-zxdp
provisioning-3445  34s  Normal  Started  pod/pod-subpath-test-hostpathsymlink-zxdp  Started container test-container-subpath-hostpathsymlink-zxdp
provisioning-4446  10s  Normal  Pulled  pod/pod-subpath-test-hostpath-njdp  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4446  10s  Normal  Created  pod/pod-subpath-test-hostpath-njdp  Created container test-init-subpath-hostpath-njdp
provisioning-4446  10s  Normal  Started  pod/pod-subpath-test-hostpath-njdp  Started container test-init-subpath-hostpath-njdp
provisioning-4446  10s  Normal  Pulled  pod/pod-subpath-test-hostpath-njdp  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4446  9s  Normal  Created  pod/pod-subpath-test-hostpath-njdp  Created container test-container-subpath-hostpath-njdp
provisioning-4446  9s  Normal  Started  pod/pod-subpath-test-hostpath-njdp  Started container test-container-subpath-hostpath-njdp
provisioning-4446  9s  Normal  Pulled  pod/pod-subpath-test-hostpath-njdp  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4446  9s  Normal  Created  pod/pod-subpath-test-hostpath-njdp  Created container test-container-volume-hostpath-njdp
provisioning-4446  9s  Normal  Started  pod/pod-subpath-test-hostpath-njdp  Started container test-container-volume-hostpath-njdp
provisioning-4787  5s  Normal  Pulled  pod/hostexec-kind-worker-tjwzx  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-4787  5s  Normal  Created  pod/hostexec-kind-worker-tjwzx  Created container agnhost
provisioning-4787  4s  Normal  Started  pod/hostexec-kind-worker-tjwzx  Started container agnhost
replicaset-7023  11s  Normal  Scheduled  pod/my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751-jjxds  Successfully assigned replicaset-7023/my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751-jjxds to kind-worker2
replicaset-7023  10s  Normal  Pulled  pod/my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751-jjxds  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
replicaset-7023  10s  Normal  Created  pod/my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751-jjxds  Created container my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751
replicaset-7023  10s  Normal  Started  pod/my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751-jjxds  Started container my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751
replicaset-7023  11s  Normal  SuccessfulCreate  replicaset/my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751  Created pod: my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751-jjxds
replication-controller-1172  20s  Normal  Scheduled  pod/my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f-648mg  Successfully assigned replication-controller-1172/my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f-648mg to kind-worker
replication-controller-1172  19s  Normal  Pulled  pod/my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f-648mg  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
replication-controller-1172  19s  Normal  Created  pod/my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f-648mg  Created container my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f
replication-controller-1172  19s  Normal  Started  pod/my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f-648mg  Started container my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f
replication-controller-1172  20s  Normal  SuccessfulCreate  replicationcontroller/my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f  Created pod: my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f-648mg
resourcequota-777  2s  Normal  Scheduled  pod/pfpod  Successfully assigned resourcequota-777/pfpod to kind-worker
resourcequota-777  1s  Normal  Pulled  pod/pfpod  Container image "k8s.gcr.io/pause:3.1" already present on machine
resourcequota-777  1s  Normal  Created  pod/pfpod  Created container pause
sched-preemption-path-3542  11s  Warning  FailedScheduling  pod/rs-pod1-79qbp  0/3 nodes are available: 3 Insufficient example.com/fakecpu.
sched-preemption-path-3542  8s  Normal  Scheduled  pod/rs-pod1-79qbp  Successfully assigned sched-preemption-path-3542/rs-pod1-79qbp to kind-worker2
sched-preemption-path-3542  11s  Warning  FailedScheduling  pod/rs-pod1-fjgjk  0/3 nodes are available: 3 Insufficient example.com/fakecpu.
sched-preemption-path-3542  8s  Normal  Scheduled  pod/rs-pod1-fjgjk  Successfully assigned sched-preemption-path-3542/rs-pod1-fjgjk to kind-worker2
sched-preemption-path-3542  11s  Warning  FailedScheduling  pod/rs-pod1-pw7gf  0/3 nodes are available: 3 Insufficient example.com/fakecpu.
sched-preemption-path-3542  8s  Normal  Scheduled  pod/rs-pod1-pw7gf  Successfully assigned sched-preemption-path-3542/rs-pod1-pw7gf to kind-worker2
sched-preemption-path-3542  11s  Warning  FailedScheduling  pod/rs-pod1-skpmm  0/3 nodes are available: 3 Insufficient example.com/fakecpu.
sched-preemption-path-3542  8s  Normal  Scheduled  pod/rs-pod1-skpmm  Successfully assigned sched-preemption-path-3542/rs-pod1-skpmm to kind-worker2
sched-preemption-path-3542  11s  Warning  FailedScheduling  pod/rs-pod1-vs9mm  0/3 nodes are available: 3 Insufficient example.com/fakecpu.
sched-preemption-path-3542  8s  Normal  Scheduled  pod/rs-pod1-vs9mm  Successfully assigned sched-preemption-path-3542/rs-pod1-vs9mm to kind-worker2
sched-preemption-path-3542  12s  Normal  SuccessfulCreate  replicaset/rs-pod1  Created pod: rs-pod1-skpmm
sched-preemption-path-3542  12s  Normal  SuccessfulCreate  replicaset/rs-pod1  Created pod: rs-pod1-79qbp
sched-preemption-path-3542  12s  Normal  SuccessfulCreate  replicaset/rs-pod1  Created pod: rs-pod1-vs9mm
sched-preemption-path-3542  12s  Normal  SuccessfulCreate  replicaset/rs-pod1  Created pod: rs-pod1-pw7gf
sched-preemption-path-3542  12s  Normal  SuccessfulCreate  replicaset/rs-pod1  Created pod: rs-pod1-fjgjk
sched-preemption-path-3542  23s  Normal  Scheduled  pod/without-label  Successfully assigned sched-preemption-path-3542/without-label to kind-worker2
sched-preemption-path-3542  21s  Normal  Pulled  pod/without-label  Container image "k8s.gcr.io/pause:3.1" already present on machine
sched-preemption-path-3542  21s  Normal  Created  pod/without-label  Created container without-label
sched-preemption-path-3542  21s  Normal  Started  pod/without-label  Started container without-label
sched-preemption-path-3542  12s  Normal  Killing  pod/without-label  Stopping container without-label
services-3872  6s  Normal  Scheduled  pod/hairpin  Successfully assigned services-3872/hairpin to kind-worker2
statefulset-1303  16s  Normal  ProvisioningSucceeded  persistentvolumeclaim/datadir-ss-0  Successfully provisioned volume pvc-2cd79c50-72b6-40f7-8ccd-31723c5c915d using kubernetes.io/host-path
statefulset-1303  16s  Warning  FailedScheduling  pod/ss-0  error while running "VolumeBinding" filter plugin for pod "ss-0": pod has unbound immediate PersistentVolumeClaims
statefulset-1303  15s  Normal  Scheduled  pod/ss-0  Successfully assigned statefulset-1303/ss-0 to kind-worker
statefulset-1303  13s  Normal  Pulled  pod/ss-0  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-1303  13s  Normal  Created  pod/ss-0  Created container webserver
statefulset-1303  12s  Normal  Started  pod/ss-0  Started container webserver
statefulset-1303  1s  Warning  Unhealthy  pod/ss-0  Readiness probe failed:
statefulset-1303  16s  Normal  SuccessfulCreate  statefulset/ss  create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success
statefulset-1303  16s  Normal  SuccessfulCreate  statefulset/ss  create Pod ss-0 in StatefulSet ss successful
statefulset-2689  7s  Normal  Scheduled  pod/ss2-0  Successfully assigned statefulset-2689/ss2-0 to kind-worker2
statefulset-2689  7s  Normal  SuccessfulCreate  statefulset/ss2  create Pod ss2-0 in StatefulSet ss2 successful
statefulset-4148  18s  Warning  PodFitsHostPorts  pod/ss-0  Predicate PodFitsHostPorts failed
statefulset-4148  5s  Normal  SuccessfulCreate  statefulset/ss  create Pod ss-0 in StatefulSet ss successful
statefulset-4148  5s  Warning  RecreatingFailedPod  statefulset/ss  StatefulSet statefulset-4148/ss is recreating failed Pod ss-0
statefulset-4148  5s  Normal  SuccessfulDelete  statefulset/ss  delete Pod ss-0 in StatefulSet ss successful
statefulset-4148  17s  Normal  Pulled  pod/test-pod  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-4148  17s  Normal  Created  pod/test-pod  Created container webserver
statefulset-4148  17s  Normal  Started  pod/test-pod  Started container webserver
statefulset-8098  4m23s  Normal  Scheduled  pod/ss2-0  Successfully assigned statefulset-8098/ss2-0 to kind-worker2
statefulset-8098  4m22s  Normal  Pulled  pod/ss2-0  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-8098  4m22s  Normal  Created  pod/ss2-0  Created container webserver
statefulset-8098  4m22s  Normal  Started  pod/ss2-0  Started container webserver
statefulset-8098  2m2s  Normal  Killing  pod/ss2-0  Stopping container webserver
statefulset-8098  108s  Normal  Scheduled  pod/ss2-0  Successfully assigned statefulset-8098/ss2-0 to kind-worker2
statefulset-8098  107s  Normal  Pulled  pod/ss2-0  Container image "docker.io/library/httpd:2.4.39-alpine" already present on machine
statefulset-8098  107s  Normal  Created  pod/ss2-0  Created container webserver
statefulset-8098  107s  Normal  Started  pod/ss2-0  Started container webserver
statefulset-8098  4m12s  Normal  Scheduled  pod/ss2-1  Successfully assigned statefulset-8098/ss2-1 to kind-worker
statefulset-8098  4m11s  Normal  Pulled  pod/ss2-1  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-8098  4m11s  Normal  Created  pod/ss2-1  Created container webserver
statefulset-8098  4m10s  Normal  Started  pod/ss2-1  Started container webserver
statefulset-8098  3m22s  Warning  Unhealthy  pod/ss2-1  Readiness probe failed: HTTP probe failed with statuscode: 404
statefulset-8098  2m22s  Normal  Scheduled  pod/ss2-1  Successfully assigned statefulset-8098/ss2-1 to kind-worker2
statefulset-8098  2m21s  Normal  Pulling  pod/ss2-1  Pulling image "docker.io/library/httpd:2.4.39-alpine"
statefulset-8098  2m15s  Normal  Pulled  pod/ss2-1  Successfully pulled image "docker.io/library/httpd:2.4.39-alpine"
statefulset-8098  2m15s  Normal  Created  pod/ss2-1  Created container webserver
statefulset-8098  2m14s  Normal  Started  pod/ss2-1  Started container webserver
statefulset-8098  72s  Warning  Unhealthy  pod/ss2-1  Readiness probe failed: HTTP probe failed with statuscode: 404
statefulset-8098  20s  Normal  Scheduled  pod/ss2-1  Successfully assigned statefulset-8098/ss2-1 to kind-worker2
statefulset-8098  19s  Normal  Pulled  pod/ss2-1  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-8098  19s  Normal  Created  pod/ss2-1  Created container webserver
statefulset-8098  19s  Normal  Started  pod/ss2-1  Started container webserver
statefulset-8098  3m58s  Normal  Scheduled  pod/ss2-2  Successfully assigned statefulset-8098/ss2-2 to kind-worker2
statefulset-8098  3m57s  Normal  Pulled  pod/ss2-2  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-8098  3m57s  Normal  Created  pod/ss2-2  Created container webserver
statefulset-8098  3m57s  Normal  Started  pod/ss2-2  Started container webserver
statefulset-8098  3m4s  Normal  Killing  pod/ss2-2  Stopping container webserver
statefulset-8098  3m4s  Warning  Unhealthy  pod/ss2-2  Readiness probe failed: Get http://10.244.2.48:80/index.html: dial tcp 10.244.2.48:80: connect: connection refused
statefulset-8098  2m50s  Normal  Scheduled  pod/ss2-2  Successfully assigned statefulset-8098/ss2-2 to kind-worker
statefulset-8098  2m48s  Normal  Pulling  pod/ss2-2  Pulling image "docker.io/library/httpd:2.4.39-alpine"
statefulset-8098  2m39s  Normal  Pulled  pod/ss2-2  Successfully pulled image "docker.io/library/httpd:2.4.39-alpine"
statefulset-8098  2m39s  Normal  Created  pod/ss2-2  Created container webserver
statefulset-8098  2m39s  Normal  Started  pod/ss2-2  Started container webserver
statefulset-8098  49s  Normal  Killing  pod/ss2-2  Stopping container webserver
statefulset-8098  47s  Warning  FailedKillPod  pod/ss2-2  error killing pod: failed to "KillPodSandbox" for "cf2c451d-dc4e-4037-8c81-c876fc569d4d" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"290e702a271bec1d19be7fb320b704c67b95a195f150d30e9c683d58279e7d63\": could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-1afcf3c2c7d30736b8246 --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
statefulset-8098  36s  Normal  Scheduled  pod/ss2-2  Successfully assigned statefulset-8098/ss2-2 to kind-worker
statefulset-8098  34s  Normal  Pulled  pod/ss2-2  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-8098  34s  Normal  Created  pod/ss2-2  Created container webserver
statefulset-8098  33s  Normal  Started
pod/ss2-2                                                                      Started container webserver\nstatefulset-8098                    108s        Normal    SuccessfulCreate           statefulset/ss2                                                                create Pod ss2-0 in StatefulSet ss2 successful\nstatefulset-8098                    20s         Normal    SuccessfulCreate           statefulset/ss2                                                                create Pod ss2-1 in StatefulSet ss2 successful\nstatefulset-8098                    36s         Normal    SuccessfulCreate           statefulset/ss2                                                                create Pod ss2-2 in StatefulSet ss2 successful\nstatefulset-8098                    49s         Normal    SuccessfulDelete           statefulset/ss2                                                                delete Pod ss2-2 in StatefulSet ss2 successful\nstatefulset-8098                    31s         Normal    SuccessfulDelete           statefulset/ss2                                                                delete Pod ss2-1 in StatefulSet ss2 successful\nstatefulset-8098                    4s          Normal    SuccessfulDelete           statefulset/ss2                                                                delete Pod ss2-0 in StatefulSet ss2 successful\nstatefulset-8098                    3m58s       Warning   FailedToUpdateEndpoint     endpoints/test                                                                 Failed to update endpoint statefulset-8098/test: Operation cannot be fulfilled on endpoints \"test\": the object has been modified; please apply your changes to the latest version and try again\nvolume-1413                         70s         Normal    Pulled                     pod/hostexec-kind-worker-s6t4k                                                 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on 
machine\nvolume-1413                         70s         Normal    Created                    pod/hostexec-kind-worker-s6t4k                                                 Created container agnhost\nvolume-1413                         70s         Normal    Started                    pod/hostexec-kind-worker-s6t4k                                                 Started container agnhost\nvolume-1413                         15s         Normal    Pulled                     pod/local-client                                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-1413                         15s         Normal    Created                    pod/local-client                                                               Created container local-client\nvolume-1413                         14s         Normal    Started                    pod/local-client                                                               Started container local-client\nvolume-1413                         1s          Normal    Killing                    pod/local-client                                                               Stopping container local-client\nvolume-1413                         52s         Normal    Pulled                     pod/local-injector                                                             Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-1413                         52s         Normal    Created                    pod/local-injector                                                             Created container local-injector\nvolume-1413                         52s         Normal    Started                    pod/local-injector                                                             Started container local-injector\nvolume-1413                         42s         Normal    Killing                    pod/local-injector                                   
                          Stopping container local-injector\nvolume-1413                         59s         Warning   ProvisioningFailed         persistentvolumeclaim/pvc-snhrz                                                storageclass.storage.k8s.io \"volume-1413\" not found\nvolume-865                          82s         Normal    Pulled                     pod/hostexec-kind-worker-znc98                                                 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolume-865                          82s         Normal    Created                    pod/hostexec-kind-worker-znc98                                                 Created container agnhost\nvolume-865                          82s         Normal    Started                    pod/hostexec-kind-worker-znc98                                                 Started container agnhost\nvolume-865                          36s         Normal    Pulled                     pod/local-client                                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-865                          36s         Normal    Created                    pod/local-client                                                               Created container local-client\nvolume-865                          36s         Normal    Started                    pod/local-client                                                               Started container local-client\nvolume-865                          19s         Normal    Killing                    pod/local-client                                                               Stopping container local-client\nvolume-865                          63s         Normal    Pulled                     pod/local-injector                                                             Container image \"docker.io/library/busybox:1.29\" already present on 
machine\nvolume-865                          63s         Normal    Created                    pod/local-injector                                                             Created container local-injector\nvolume-865                          63s         Normal    Started                    pod/local-injector                                                             Started container local-injector\nvolume-865                          52s         Normal    Killing                    pod/local-injector                                                             Stopping container local-injector\nvolume-865                          71s         Warning   ProvisioningFailed         persistentvolumeclaim/pvc-5rcs2                                                storageclass.storage.k8s.io \"volume-865\" not found\nvolumemode-3978                     27s         Normal    Pulled                     pod/hostexec-kind-worker-trf9m                                                 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-3978                     27s         Normal    Created                    pod/hostexec-kind-worker-trf9m                                                 Created container agnhost\nvolumemode-3978                     27s         Normal    Started                    pod/hostexec-kind-worker-trf9m                                                 Started container agnhost\nvolumemode-3978                     14s         Warning   ProvisioningFailed         persistentvolumeclaim/pvc-c4vzx                                                storageclass.storage.k8s.io \"volumemode-3978\" not found\nvolumemode-3978                     8s          Normal    Scheduled                  pod/security-context-fac0d0e3-1388-4379-8ed7-c144706f952e                      Successfully assigned volumemode-3978/security-context-fac0d0e3-1388-4379-8ed7-c144706f952e to kind-worker\nvolumemode-3978                     8s     
     Normal    Pulled                     pod/security-context-fac0d0e3-1388-4379-8ed7-c144706f952e                      Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-3978                     8s          Normal    Created                    pod/security-context-fac0d0e3-1388-4379-8ed7-c144706f952e                      Created container write-pod\nvolumemode-3978                     7s          Normal    Started                    pod/security-context-fac0d0e3-1388-4379-8ed7-c144706f952e                      Started container write-pod\nvolumemode-7769                     40s         Normal    Pulled                     pod/hostexec-kind-worker-cn5xr                                                 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolumemode-7769                     40s         Normal    Created                    pod/hostexec-kind-worker-cn5xr                                                 Created container agnhost\nvolumemode-7769                     39s         Normal    Started                    pod/hostexec-kind-worker-cn5xr                                                 Started container agnhost\nvolumemode-7769                     21s         Warning   ProvisioningFailed         persistentvolumeclaim/pvc-7h75l                                                storageclass.storage.k8s.io \"volumemode-7769\" not found\nvolumemode-7769                     7s          Normal    Scheduled                  pod/security-context-8a718d0c-90f7-4d00-a39f-70131b82ac14                      Successfully assigned volumemode-7769/security-context-8a718d0c-90f7-4d00-a39f-70131b82ac14 to kind-worker\nvolumemode-7769                     6s          Normal    Pulled                     pod/security-context-8a718d0c-90f7-4d00-a39f-70131b82ac14                      Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolumemode-7769                     
6s          Normal    Created                    pod/security-context-8a718d0c-90f7-4d00-a39f-70131b82ac14                      Created container write-pod\nvolumemode-7769                     6s          Normal    Started                    pod/security-context-8a718d0c-90f7-4d00-a39f-70131b82ac14                      Started container write-pod\n"
Nov 22 03:30:21.464: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config get persistentvolumes --all-namespaces'
Nov 22 03:30:21.595: INFO: stderr: ""
Nov 22 03:30:21.595: INFO: stdout:
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                             STORAGECLASS               REASON   AGE
local-99lx2                                2Gi        RWO            Retain           Bound       volumemode-7769/pvc-7h75l         volumemode-7769                     21s
local-d62gl                                2Gi        RWO            Retain           Bound       volumemode-3978/pvc-c4vzx         volumemode-3978                     14s
local-wkhmv                                2Gi        RWO            Retain           Bound       volume-1413/pvc-snhrz             volume-1413                         59s
pv1namelwl4k8hvr9                          3M         RWO            Retain           Available                                                                         0s
pvc-2cd79c50-72b6-40f7-8ccd-31723c5c915d   1          RWO            Delete           Bound       statefulset-1303/datadir-ss-0     standard                            16s
pvc-35d7b36d-30c9-4d18-ada1-7015f17fb77d   1Gi        RWO            Delete           Bound       csi-mock-volumes-9179/pvc-hsvjq   csi-mock-volumes-9179-sc            36s
Nov 22 03:30:21.621: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config get pods --all-namespaces'
Nov 22 03:30:21.768: INFO: stderr: ""
Nov 22 03:30:21.768: INFO: stdout:
NAMESPACE                           NAME                                                           READY   STATUS              RESTARTS   AGE
container-probe-146                 test-webserver-b56520ca-dce2-4dea-b030-d7c1dc01dc59            1/1     Running             0          3m19s
container-probe-153                 liveness-a53bb86f-77fe-4914-9dd3-67b2d9bdd5c5                  1/1     Running             0          3m38s
csi-mock-volumes-9179               csi-mockplugin-0                                               3/3     Running             0          40s
csi-mock-volumes-9179               csi-mockplugin-attacher-0                                      1/1     Running             0          40s
csi-mock-volumes-9179               pvc-volume-tester-9s8ks                                        1/1     Running             0          34s
deployment-649                      test-rolling-update-controller-2cl67                           0/1     Pending             0          1s
disruption-7756                     pod-0                                                          1/1     Terminating         0          40s
disruption-7756                     pod-1                                                          1/1     Terminating         0          39s
disruption-7756                     pod-2                                                          1/1     Terminating         0          39s
disruption-9889                     pod-0                                                          1/1     Terminating         0          63s
disruption-9889                     pod-1                                                          0/1     Terminating         0          63s
disruption-9889                     pod-2                                                          1/1     Terminating         0          63s
e2e-kubelet-etc-hosts-2912          test-host-network-pod                                          0/2     Pending             0          3s
e2e-kubelet-etc-hosts-2912          test-pod                                                       3/3     Running             0          13s
e2e-privileged-pod-744              privileged-pod                                                 2/2     Terminating         0          26s
emptydir-3786                       pod-9ac383fe-bf63-44ae-a8cb-bba4c6a73c0e                       0/1     Pending             0          5s
ephemeral-6689                      csi-hostpath-attacher-0                                        0/1     ContainerCreating   0          17s
ephemeral-6689                      csi-hostpath-provisioner-0                                     0/1     ContainerCreating   0          17s
ephemeral-6689                      csi-hostpath-resizer-0                                         0/1     ContainerCreating   0          17s
ephemeral-6689                      csi-hostpathplugin-0                                           0/3     ContainerCreating   0          17s
ephemeral-6689                      csi-snapshotter-0                                              0/1     ContainerCreating   0          17s
ephemeral-6689                      inline-volume-tester-2s74d                                     0/1     ContainerCreating   0          17s
events-7362                         send-events-39b4dd4b-336d-4177-aef9-c3e2308d8169               1/1     Terminating         0          47s
job-5355                            adopt-release-7k8xm                                            1/1     Terminating         0          46s
job-5355                            adopt-release-gbm6x                                            1/1     Terminating         0          46s
job-5355                            adopt-release-khnlb                                            1/1     Terminating         0          25s
kube-system                         coredns-6955765f44-mxkvk                                       1/1     Running             0          6m9s
kube-system                         coredns-6955765f44-v49tc                                       1/1     Running             0          6m9s
kube-system                         etcd-kind-control-plane                                        1/1     Running             0          6m24s
kube-system                         kindnet-krxhw                                                  1/1     Running             0          5m51s
kube-system                         kindnet-lnv5z                                                  1/1     Running             0          6m9s
kube-system                         kindnet-rmvhf                                                  1/1     Running             0          5m50s
kube-system                         kube-apiserver-kind-control-plane                              1/1     Running             0          6m24s
kube-system                         kube-controller-manager-kind-control-plane                     1/1     Running             0          6m24s
kube-system                         kube-proxy-m22kv                                               1/1     Running             0          5m51s
kube-system                         kube-proxy-v8fsf                                               1/1     Running             0          5m50s
kube-system                         kube-proxy-vjhtv                                               1/1     Running             0          6m9s
kube-system                         kube-scheduler-kind-control-plane                              1/1     Running             0          6m24s
kubectl-1391                        httpd                                                          1/1     Running             0          16s
kubectl-6642                        pod1lwl4k8hvr9                                                 0/1     Pending             0          0s
kubectl-6642                        rc1lwl4k8hvr9-bv9mh                                            0/1     Pending             0          1s
nettest-3642                        netserver-0                                                    0/1     Running             0          18s
nettest-3642                        netserver-1                                                    0/1     Running             0          18s
persistent-local-volumes-test-737   hostexec-kind-worker-bdcsn                                     1/1     Running             0          9s
prestop-2981                        server                                                         0/1     Pending             0          2s
provisioning-4787                   hostexec-kind-worker-tjwzx                                     0/1     Pending             0          5s
replicaset-7023                     my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751-jjxds   0/1     Pending             0          11s
replication-controller-1172         my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f-648mg   1/1     Running             0          20s
resourcequota-777                   pfpod                                                          0/1     Pending             0          2s
sched-preemption-path-3542          rs-pod1-79qbp                                                  0/1     Pending             0          12s
sched-preemption-path-3542          rs-pod1-fjgjk                                                  0/1     Pending             0          12s
sched-preemption-path-3542          rs-pod1-pw7gf                                                  0/1     Pending             0          12s
sched-preemption-path-3542          rs-pod1-skpmm                                                  0/1     Pending             0          12s
sched-preemption-path-3542          rs-pod1-vs9mm                                                  0/1     Pending             0          12s
services-3872                       hairpin                                                        0/1     Pending             0          6s
statefulset-1303                    ss-0                                                           0/1     Running             0          16s
statefulset-2689                    ss2-0                                                          0/1     Pending             0          7s
statefulset-4148                    ss-0                                                           0/1     Pending             0          5s
statefulset-4148                    test-pod                                                       1/1     Running             0          18s
statefulset-8098                    ss2-0                                                          1/1     Terminating         0          108s
statefulset-8098                    ss2-1                                                          1/1     Running             0          20s
statefulset-8098                    ss2-2                                                          1/1     Running             0          36s
svcaccounts-2387                    pod-service-account-defaultsa-mountspec                        0/1     Terminating         0          42s
svcaccounts-2387                    pod-service-account-defaultsa-nomountspec                      0/1     Terminating         0          42s
svcaccounts-2387                    pod-service-account-mountsa                                    0/1     Terminating         0          42s
svcaccounts-2387                    pod-service-account-mountsa-mountspec                          0/1     Terminating         0          42s
svcaccounts-2387                    pod-service-account-mountsa-nomountspec                        0/1     Terminating         0          42s
svcaccounts-2387                    pod-service-account-nomountsa                                  0/1     Terminating         0          42s
svcaccounts-2387                    pod-service-account-nomountsa-mountspec                        0/1     Terminating         0          42s
svcaccounts-2387                    pod-service-account-nomountsa-nomountspec                      0/1     Terminating         0          42s
volume-1413                         hostexec-kind-worker-s6t4k                                     1/1     Running             0          72s
volume-1413                         local-client                                                   1/1     Terminating         0          16s
volumemode-3978                     hostexec-kind-worker-trf9m                                     1/1     Running             0          27s
volumemode-3978                     security-context-fac0d0e3-1388-4379-8ed7-c144706f952e          0/1     ContainerCreating   0          8s
volumemode-7769                     hostexec-kind-worker-cn5xr                                     1/1     Running             0          41s
volumemode-7769                     security-context-8a718d0c-90f7-4d00-a39f-70131b82ac14          0/1     ContainerCreating   0          7s
... skipping 29 lines ...
Nov 22 03:30:23.425: INFO: stdout:
NAMESPACE      NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
kube-system    kindnet         3         3         3       3            3           <none>                        6m25s
kube-system    kube-proxy      3         3         3       3            3           beta.kubernetes.io/os=linux   6m27s
kubectl-6642   ds6lwl4k8hvr9   2         2         0       2            0           <none>                        0s
Nov 22 03:30:23.457: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config get statefulsets --all-namespaces'
Nov 22 03:30:23.701: INFO: stderr: ""
Nov 22 03:30:23.701: INFO: stdout:
NAMESPACE               NAME                       READY   AGE
csi-mock-volumes-9179   csi-mockplugin             1/1     42s
csi-mock-volumes-9179   csi-mockplugin-attacher    1/1     42s
ephemeral-6689          csi-hostpath-attacher      1/1     19s
ephemeral-6689          csi-hostpath-provisioner   0/1     19s
ephemeral-6689          csi-hostpath-resizer       0/1     19s
ephemeral-6689          csi-hostpathplugin         0/1     19s
ephemeral-6689          csi-snapshotter            0/1     19s
kubectl-6642            ss3lwl4k8hvr9              0/1     0s
statefulset-1303        ss                         0/1     18s
statefulset-2689        ss2                        0/3     9s
statefulset-4148        ss                         0/1     20s
statefulset-8098        ss2                        3/3     4m25s
Nov 22 03:30:23.811: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config get events --all-namespaces'
Nov 22 03:30:24.032: INFO: stderr: ""
Nov 22 03:30:24.032: INFO: stdout: "NAMESPACE                           LAST SEEN   TYPE      REASON                     OBJECT                                                                         MESSAGE\ncontainer-probe-146                 3m21s       Normal    Scheduled                  pod/test-webserver-b56520ca-dce2-4dea-b030-d7c1dc01dc59                        Successfully assigned container-probe-146/test-webserver-b56520ca-dce2-4dea-b030-d7c1dc01dc59 to kind-worker2\ncontainer-probe-146                 3m20s       Normal    Pulled                     pod/test-webserver-b56520ca-dce2-4dea-b030-d7c1dc01dc59                        Container image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\" already present on machine\ncontainer-probe-146                 3m20s       Normal    Created                    pod/test-webserver-b56520ca-dce2-4dea-b030-d7c1dc01dc59                        Created container test-webserver\ncontainer-probe-146                 3m19s       Normal    Started                    pod/test-webserver-b56520ca-dce2-4dea-b030-d7c1dc01dc59                        Started container test-webserver\ncontainer-probe-153                 3m40s       Normal    Scheduled                  pod/liveness-a53bb86f-77fe-4914-9dd3-67b2d9bdd5c5                              Successfully assigned container-probe-153/liveness-a53bb86f-77fe-4914-9dd3-67b2d9bdd5c5 to kind-worker\ncontainer-probe-153                 3m40s       Normal    Pulled                     pod/liveness-a53bb86f-77fe-4914-9dd3-67b2d9bdd5c5                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ncontainer-probe-153                 3m40s       Normal    Created                    pod/liveness-a53bb86f-77fe-4914-9dd3-67b2d9bdd5c5                              Created container liveness\ncontainer-probe-153                 3m39s       Normal    Started                    pod/liveness-a53bb86f-77fe-4914-9dd3-67b2d9bdd5c5   
                           Started container liveness\ncsi-mock-volumes-9179               41s         Normal    Pulling                    pod/csi-mockplugin-0                                                           Pulling image \"quay.io/k8scsi/csi-provisioner:v1.4.0-rc1\"\ncsi-mock-volumes-9179               41s         Normal    Pulled                     pod/csi-mockplugin-0                                                           Successfully pulled image \"quay.io/k8scsi/csi-provisioner:v1.4.0-rc1\"\ncsi-mock-volumes-9179               41s         Normal    Created                    pod/csi-mockplugin-0                                                           Created container csi-provisioner\ncsi-mock-volumes-9179               40s         Normal    Started                    pod/csi-mockplugin-0                                                           Started container csi-provisioner\ncsi-mock-volumes-9179               40s         Normal    Pulling                    pod/csi-mockplugin-0                                                           Pulling image \"quay.io/k8scsi/csi-node-driver-registrar:v1.1.0\"\ncsi-mock-volumes-9179               40s         Normal    Pulled                     pod/csi-mockplugin-0                                                           Successfully pulled image \"quay.io/k8scsi/csi-node-driver-registrar:v1.1.0\"\ncsi-mock-volumes-9179               39s         Normal    Created                    pod/csi-mockplugin-0                                                           Created container driver-registrar\ncsi-mock-volumes-9179               39s         Normal    Started                    pod/csi-mockplugin-0                                                           Started container driver-registrar\ncsi-mock-volumes-9179               39s         Normal    Pulled                     pod/csi-mockplugin-0                                                           Container image 
\"quay.io/k8scsi/mock-driver:v2.1.0\" already present on machine\ncsi-mock-volumes-9179               39s         Normal    Created                    pod/csi-mockplugin-0                                                           Created container mock\ncsi-mock-volumes-9179               39s         Normal    Started                    pod/csi-mockplugin-0                                                           Started container mock\ncsi-mock-volumes-9179               41s         Normal    Pulling                    pod/csi-mockplugin-attacher-0                                                  Pulling image \"quay.io/k8scsi/csi-attacher:v1.1.0\"\ncsi-mock-volumes-9179               40s         Normal    Pulled                     pod/csi-mockplugin-attacher-0                                                  Successfully pulled image \"quay.io/k8scsi/csi-attacher:v1.1.0\"\ncsi-mock-volumes-9179               40s         Normal    Created                    pod/csi-mockplugin-attacher-0                                                  Created container csi-attacher\ncsi-mock-volumes-9179               40s         Normal    Started                    pod/csi-mockplugin-attacher-0                                                  Started container csi-attacher\ncsi-mock-volumes-9179               42s         Normal    SuccessfulCreate           statefulset/csi-mockplugin-attacher                                            create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-9179               42s         Normal    SuccessfulCreate           statefulset/csi-mockplugin                                                     create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-9179               40s         Normal    ExternalProvisioning       persistentvolumeclaim/pvc-hsvjq                                                waiting for a volume to be created, either by external provisioner 
\"csi-mock-csi-mock-volumes-9179\" or manually created by system administrator\ncsi-mock-volumes-9179               38s         Normal    Provisioning               persistentvolumeclaim/pvc-hsvjq                                                External provisioner is provisioning volume for claim \"csi-mock-volumes-9179/pvc-hsvjq\"\ncsi-mock-volumes-9179               14s         Warning   ExternalExpanding          persistentvolumeclaim/pvc-hsvjq                                                Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\ncsi-mock-volumes-9179               36s         Normal    SuccessfulAttachVolume     pod/pvc-volume-tester-9s8ks                                                    AttachVolume.Attach succeeded for volume \"pvc-35d7b36d-30c9-4d18-ada1-7015f17fb77d\"\ncsi-mock-volumes-9179               29s         Normal    Pulled                     pod/pvc-volume-tester-9s8ks                                                    Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-9179               29s         Normal    Created                    pod/pvc-volume-tester-9s8ks                                                    Created container volume-tester\ncsi-mock-volumes-9179               29s         Normal    Started                    pod/pvc-volume-tester-9s8ks                                                    Started container volume-tester\ndefault                             6m27s       Normal    Starting                   node/kind-control-plane                                                        Starting kubelet.\ndefault                             6m27s       Warning   CheckLimitsForResolvConf   node/kind-control-plane                                                        Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\ndefault                             6m27s       
Normal    NodeHasSufficientMemory    node/kind-control-plane                                                        Node kind-control-plane status is now: NodeHasSufficientMemory\ndefault                             6m27s       Normal    NodeHasNoDiskPressure      node/kind-control-plane                                                        Node kind-control-plane status is now: NodeHasNoDiskPressure\ndefault                             6m27s       Normal    NodeHasSufficientPID       node/kind-control-plane                                                        Node kind-control-plane status is now: NodeHasSufficientPID\ndefault                             6m27s       Normal    NodeAllocatableEnforced    node/kind-control-plane                                                        Updated Node Allocatable limit across pods\ndefault                             6m11s       Normal    RegisteredNode             node/kind-control-plane                                                        Node kind-control-plane event: Registered Node kind-control-plane in Controller\ndefault                             6m4s        Normal    Starting                   node/kind-control-plane                                                        Starting kube-proxy.\ndefault                             5m27s       Normal    NodeReady                  node/kind-control-plane                                                        Node kind-control-plane status is now: NodeReady\ndefault                             5m53s       Normal    NodeHasSufficientMemory    node/kind-worker                                                               Node kind-worker status is now: NodeHasSufficientMemory\ndefault                             5m51s       Normal    RegisteredNode             node/kind-worker                                                               Node kind-worker event: Registered Node kind-worker in Controller\ndefault                             5m45s       Normal    
Starting                   node/kind-worker                                                               Starting kube-proxy.\ndefault                             5m52s       Normal    NodeHasSufficientPID       node/kind-worker2                                                              Node kind-worker2 status is now: NodeHasSufficientPID\ndefault                             5m51s       Normal    RegisteredNode             node/kind-worker2                                                              Node kind-worker2 event: Registered Node kind-worker2 in Controller\ndefault                             5m44s       Normal    Starting                   node/kind-worker2                                                              Starting kube-proxy.\ndeployment-649                      3s          Normal    Scheduled                  pod/test-rolling-update-controller-2cl67                                       Successfully assigned deployment-649/test-rolling-update-controller-2cl67 to kind-worker\ndeployment-649                      2s          Normal    Pulled                     pod/test-rolling-update-controller-2cl67                                       Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\ndeployment-649                      2s          Normal    Created                    pod/test-rolling-update-controller-2cl67                                       Created container httpd\ndeployment-649                      2s          Normal    Started                    pod/test-rolling-update-controller-2cl67                                       Started container httpd\ndeployment-649                      3s          Normal    SuccessfulCreate           replicaset/test-rolling-update-controller                                      Created pod: test-rolling-update-controller-2cl67\ne2e-kubelet-etc-hosts-2912          4s          Normal    Scheduled                  pod/test-host-network-pod                           
                           Successfully assigned e2e-kubelet-etc-hosts-2912/test-host-network-pod to kind-worker2\ne2e-kubelet-etc-hosts-2912          15s         Normal    Scheduled                  pod/test-pod                                                                   Successfully assigned e2e-kubelet-etc-hosts-2912/test-pod to kind-worker\ne2e-kubelet-etc-hosts-2912          12s         Normal    Pulled                     pod/test-pod                                                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ne2e-kubelet-etc-hosts-2912          12s         Normal    Created                    pod/test-pod                                                                   Created container busybox-1\ne2e-kubelet-etc-hosts-2912          12s         Normal    Started                    pod/test-pod                                                                   Started container busybox-1\ne2e-kubelet-etc-hosts-2912          12s         Normal    Pulled                     pod/test-pod                                                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ne2e-kubelet-etc-hosts-2912          12s         Normal    Created                    pod/test-pod                                                                   Created container busybox-2\ne2e-kubelet-etc-hosts-2912          11s         Normal    Started                    pod/test-pod                                                                   Started container busybox-2\ne2e-kubelet-etc-hosts-2912          11s         Normal    Pulled                     pod/test-pod                                                                   Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\ne2e-kubelet-etc-hosts-2912          11s         Normal    Created                    pod/test-pod    
                                                               Created container busybox-3\ne2e-kubelet-etc-hosts-2912          11s         Normal    Started                    pod/test-pod                                                                   Started container busybox-3\nemptydir-3786                       7s          Normal    Scheduled                  pod/pod-9ac383fe-bf63-44ae-a8cb-bba4c6a73c0e                                   Successfully assigned emptydir-3786/pod-9ac383fe-bf63-44ae-a8cb-bba4c6a73c0e to kind-worker\nemptydir-3786                       6s          Normal    Pulling                    pod/pod-9ac383fe-bf63-44ae-a8cb-bba4c6a73c0e                                   Pulling image \"gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0\"\nemptydir-3786                       5s          Normal    Pulled                     pod/pod-9ac383fe-bf63-44ae-a8cb-bba4c6a73c0e                                   Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0\"\nemptydir-3786                       5s          Normal    Created                    pod/pod-9ac383fe-bf63-44ae-a8cb-bba4c6a73c0e                                   Created container test-container\nemptydir-3786                       5s          Normal    Started                    pod/pod-9ac383fe-bf63-44ae-a8cb-bba4c6a73c0e                                   Started container test-container\nephemeral-6689                      18s         Normal    Pulling                    pod/csi-hostpath-attacher-0                                                    Pulling image \"quay.io/k8scsi/csi-attacher:v2.0.0\"\nephemeral-6689                      15s         Normal    Pulled                     pod/csi-hostpath-attacher-0                                                    Successfully pulled image \"quay.io/k8scsi/csi-attacher:v2.0.0\"\nephemeral-6689                      15s         Normal    Created                    pod/csi-hostpath-attacher-0                  
                                  Created container csi-attacher\nephemeral-6689                      14s         Normal    Started                    pod/csi-hostpath-attacher-0                                                    Started container csi-attacher\nephemeral-6689                      19s         Normal    SuccessfulCreate           statefulset/csi-hostpath-attacher                                              create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\nephemeral-6689                      17s         Normal    Pulling                    pod/csi-hostpath-provisioner-0                                                 Pulling image \"quay.io/k8scsi/csi-provisioner:v1.5.0-rc1\"\nephemeral-6689                      10s         Normal    Pulled                     pod/csi-hostpath-provisioner-0                                                 Successfully pulled image \"quay.io/k8scsi/csi-provisioner:v1.5.0-rc1\"\nephemeral-6689                      10s         Normal    Created                    pod/csi-hostpath-provisioner-0                                                 Created container csi-provisioner\nephemeral-6689                      10s         Normal    Started                    pod/csi-hostpath-provisioner-0                                                 Started container csi-provisioner\nephemeral-6689                      19s         Normal    SuccessfulCreate           statefulset/csi-hostpath-provisioner                                           create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\nephemeral-6689                      17s         Normal    Pulling                    pod/csi-hostpath-resizer-0                                                     Pulling image \"quay.io/k8scsi/csi-resizer:v0.3.0\"\nephemeral-6689                      19s         Normal    SuccessfulCreate           statefulset/csi-hostpath-resizer                                          
     create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nephemeral-6689                      18s         Normal    Pulling                    pod/csi-hostpathplugin-0                                                       Pulling image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\nephemeral-6689                      13s         Normal    Pulled                     pod/csi-hostpathplugin-0                                                       Successfully pulled image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\nephemeral-6689                      13s         Normal    Created                    pod/csi-hostpathplugin-0                                                       Created container node-driver-registrar\nephemeral-6689                      13s         Normal    Started                    pod/csi-hostpathplugin-0                                                       Started container node-driver-registrar\nephemeral-6689                      13s         Normal    Pulling                    pod/csi-hostpathplugin-0                                                       Pulling image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\nephemeral-6689                      19s         Normal    SuccessfulCreate           statefulset/csi-hostpathplugin                                                 create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nephemeral-6689                      17s         Normal    Pulling                    pod/csi-snapshotter-0                                                          Pulling image \"quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2\"\nephemeral-6689                      19s         Normal    SuccessfulCreate           statefulset/csi-snapshotter                                                    create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nephemeral-6689                      11s         Warning   FailedMount                
pod/inline-volume-tester-2s74d                                                 MountVolume.SetUp failed for volume \"my-volume-0\" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-6689 not found in the list of registered CSI drivers\nkube-system                         6m11s       Warning   FailedScheduling           pod/coredns-6955765f44-mxkvk                                                   0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\nkube-system                         5m53s       Warning   FailedScheduling           pod/coredns-6955765f44-mxkvk                                                   0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.\nkube-system                         5m27s       Warning   FailedScheduling           pod/coredns-6955765f44-mxkvk                                                   0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.\nkube-system                         5m22s       Normal    Scheduled                  pod/coredns-6955765f44-mxkvk                                                   Successfully assigned kube-system/coredns-6955765f44-mxkvk to kind-control-plane\nkube-system                         5m16s       Normal    Pulled                     pod/coredns-6955765f44-mxkvk                                                   Container image \"k8s.gcr.io/coredns:1.6.5\" already present on machine\nkube-system                         5m15s       Normal    Created                    pod/coredns-6955765f44-mxkvk                                                   Created container coredns\nkube-system                         5m15s       Normal    Started                    pod/coredns-6955765f44-mxkvk                                                   Started container coredns\nkube-system                         6m11s       Warning   FailedScheduling           pod/coredns-6955765f44-v49tc                
                                   0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\nkube-system                         5m53s       Warning   FailedScheduling           pod/coredns-6955765f44-v49tc                                                   0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.\nkube-system                         5m27s       Warning   FailedScheduling           pod/coredns-6955765f44-v49tc                                                   0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.\nkube-system                         5m22s       Normal    Scheduled                  pod/coredns-6955765f44-v49tc                                                   Successfully assigned kube-system/coredns-6955765f44-v49tc to kind-control-plane\nkube-system                         5m16s       Normal    Pulled                     pod/coredns-6955765f44-v49tc                                                   Container image \"k8s.gcr.io/coredns:1.6.5\" already present on machine\nkube-system                         5m15s       Normal    Created                    pod/coredns-6955765f44-v49tc                                                   Created container coredns\nkube-system                         5m15s       Normal    Started                    pod/coredns-6955765f44-v49tc                                                   Started container coredns\nkube-system                         6m11s       Normal    SuccessfulCreate           replicaset/coredns-6955765f44                                                  Created pod: coredns-6955765f44-v49tc\nkube-system                         6m11s       Normal    SuccessfulCreate           replicaset/coredns-6955765f44                                                  Created pod: coredns-6955765f44-mxkvk\nkube-system                         6m11s       Normal    ScalingReplicaSet          deployment/coredns                                
                             Scaled up replica set coredns-6955765f44 to 2\nkube-system                         5m53s       Normal    Scheduled                  pod/kindnet-krxhw                                                              Successfully assigned kube-system/kindnet-krxhw to kind-worker\nkube-system                         5m52s       Normal    Pulling                    pod/kindnet-krxhw                                                              Pulling image \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\nkube-system                         5m48s       Normal    Pulled                     pod/kindnet-krxhw                                                              Successfully pulled image \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\nkube-system                         5m47s       Normal    Created                    pod/kindnet-krxhw                                                              Created container kindnet-cni\nkube-system                         5m47s       Normal    Started                    pod/kindnet-krxhw                                                              Started container kindnet-cni\nkube-system                         6m11s       Normal    Scheduled                  pod/kindnet-lnv5z                                                              Successfully assigned kube-system/kindnet-lnv5z to kind-control-plane\nkube-system                         6m10s       Normal    Pulling                    pod/kindnet-lnv5z                                                              Pulling image \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\nkube-system                         6m8s        Normal    Pulled                     pod/kindnet-lnv5z                                                              Successfully pulled image 
\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\nkube-system                         6m7s        Normal    Created                    pod/kindnet-lnv5z                                                              Created container kindnet-cni\nkube-system                         6m7s        Normal    Started                    pod/kindnet-lnv5z                                                              Started container kindnet-cni\nkube-system                         5m52s       Normal    Scheduled                  pod/kindnet-rmvhf                                                              Successfully assigned kube-system/kindnet-rmvhf to kind-worker2\nkube-system                         5m51s       Normal    Pulling                    pod/kindnet-rmvhf                                                              Pulling image \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\nkube-system                         5m47s       Normal    Pulled                     pod/kindnet-rmvhf                                                              Successfully pulled image \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\nkube-system                         5m47s       Normal    Created                    pod/kindnet-rmvhf                                                              Created container kindnet-cni\nkube-system                         5m47s       Normal    Started                    pod/kindnet-rmvhf                                                              Started container kindnet-cni\nkube-system                         6m11s       Normal    SuccessfulCreate           daemonset/kindnet                                                              Created pod: kindnet-lnv5z\nkube-system                         5m53s       Normal    SuccessfulCreate           daemonset/kindnet                             
                                 Created pod: kindnet-krxhw\nkube-system                         5m52s       Normal    SuccessfulCreate           daemonset/kindnet                                                              Created pod: kindnet-rmvhf\nkube-system                         6m27s       Normal    LeaderElection             endpoints/kube-controller-manager                                              kind-control-plane_cb68fe76-a826-4e72-81b7-ec45e90de04d became leader\nkube-system                         6m27s       Normal    LeaderElection             lease/kube-controller-manager                                                  kind-control-plane_cb68fe76-a826-4e72-81b7-ec45e90de04d became leader\nkube-system                         5m53s       Normal    Scheduled                  pod/kube-proxy-m22kv                                                           Successfully assigned kube-system/kube-proxy-m22kv to kind-worker\nkube-system                         5m52s       Normal    Pulled                     pod/kube-proxy-m22kv                                                           Container image \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\" already present on machine\nkube-system                         5m50s       Normal    Created                    pod/kube-proxy-m22kv                                                           Created container kube-proxy\nkube-system                         5m50s       Normal    Started                    pod/kube-proxy-m22kv                                                           Started container kube-proxy\nkube-system                         5m52s       Normal    Scheduled                  pod/kube-proxy-v8fsf                                                           Successfully assigned kube-system/kube-proxy-v8fsf to kind-worker2\nkube-system                         5m51s       Normal    Pulled                     pod/kube-proxy-v8fsf                                                   
        Container image \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\" already present on machine\nkube-system                         5m49s       Normal    Created                    pod/kube-proxy-v8fsf                                                           Created container kube-proxy\nkube-system                         5m49s       Normal    Started                    pod/kube-proxy-v8fsf                                                           Started container kube-proxy\nkube-system                         6m11s       Normal    Scheduled                  pod/kube-proxy-vjhtv                                                           Successfully assigned kube-system/kube-proxy-vjhtv to kind-control-plane\nkube-system                         6m10s       Normal    Pulled                     pod/kube-proxy-vjhtv                                                           Container image \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\" already present on machine\nkube-system                         6m9s        Normal    Created                    pod/kube-proxy-vjhtv                                                           Created container kube-proxy\nkube-system                         6m9s        Normal    Started                    pod/kube-proxy-vjhtv                                                           Started container kube-proxy\nkube-system                         6m11s       Normal    SuccessfulCreate           daemonset/kube-proxy                                                           Created pod: kube-proxy-vjhtv\nkube-system                         5m53s       Normal    SuccessfulCreate           daemonset/kube-proxy                                                           Created pod: kube-proxy-m22kv\nkube-system                         5m52s       Normal    SuccessfulCreate           daemonset/kube-proxy                                                           Created pod: kube-proxy-v8fsf\nkube-system      
                   6m28s       Normal    LeaderElection             endpoints/kube-scheduler                                                       kind-control-plane_124e1804-3c2f-4bce-88f9-cec3a614addb became leader\nkube-system                         6m28s       Normal    LeaderElection             lease/kube-scheduler                                                           kind-control-plane_124e1804-3c2f-4bce-88f9-cec3a614addb became leader\nkubectl-1391                        18s         Normal    Scheduled                  pod/httpd                                                                      Successfully assigned kubectl-1391/httpd to kind-worker\nkubectl-1391                        16s         Normal    Pulled                     pod/httpd                                                                      Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nkubectl-1391                        16s         Normal    Created                    pod/httpd                                                                      Created container httpd\nkubectl-1391                        16s         Normal    Started                    pod/httpd                                                                      Started container httpd\nkubectl-1391                        1s          Normal    Scheduled                  pod/run-log-test                                                               Successfully assigned kubectl-1391/run-log-test to kind-worker\nkubectl-1391                        1s          Normal    Pulled                     pod/run-log-test                                                               Container image \"docker.io/library/busybox:1.29\" already present on machine\nkubectl-1391                        1s          Normal    Created                    pod/run-log-test                                                               Created container run-log-test\nkubectl-1391                   
1s  Normal  Started  pod/run-log-test  Started container run-log-test
kubectl-6642  1s  Normal  Scheduled  pod/deployment4lwl4k8hvr9-87fd78899-lvvl9  Successfully assigned kubectl-6642/deployment4lwl4k8hvr9-87fd78899-lvvl9 to kind-worker
kubectl-6642  1s  Normal  SuccessfulCreate  replicaset/deployment4lwl4k8hvr9-87fd78899  Created pod: deployment4lwl4k8hvr9-87fd78899-lvvl9
kubectl-6642  1s  Normal  ScalingReplicaSet  deployment/deployment4lwl4k8hvr9  Scaled up replica set deployment4lwl4k8hvr9-87fd78899 to 1
kubectl-6642  0s  Normal  Scheduled  pod/ds6lwl4k8hvr9-dt7br  Successfully assigned kubectl-6642/ds6lwl4k8hvr9-dt7br to kind-worker2
kubectl-6642  0s  Normal  Scheduled  pod/ds6lwl4k8hvr9-ft2bp  Successfully assigned kubectl-6642/ds6lwl4k8hvr9-ft2bp to kind-worker
kubectl-6642  0s  Normal  SuccessfulCreate  daemonset/ds6lwl4k8hvr9  Created pod: ds6lwl4k8hvr9-ft2bp
kubectl-6642  0s  Normal  SuccessfulCreate  daemonset/ds6lwl4k8hvr9  Created pod: ds6lwl4k8hvr9-dt7br
kubectl-6642  <unknown>  Laziness  some data here
kubectl-6642  2s  Warning  FailedScheduling  pod/pod1lwl4k8hvr9  0/3 nodes are available: 3 Insufficient cpu.
kubectl-6642  2s  Warning  FailedScheduling  pod/pod1lwl4k8hvr9  skip schedule deleting pod: kubectl-6642/pod1lwl4k8hvr9
kubectl-6642  1s  Normal  ProvisioningSucceeded  persistentvolumeclaim/pvc1lwl4k8hvr9  Successfully provisioned volume pvc-65045248-ab25-4959-ac07-60edd5a0d741 using kubernetes.io/host-path
kubectl-6642  3s  Normal  Scheduled  pod/rc1lwl4k8hvr9-bv9mh  Successfully assigned kubectl-6642/rc1lwl4k8hvr9-bv9mh to kind-worker2
kubectl-6642  3s  Normal  SuccessfulCreate  replicationcontroller/rc1lwl4k8hvr9  Created pod: rc1lwl4k8hvr9-bv9mh
kubectl-6642  1s  Normal  Scheduled  pod/rs3lwl4k8hvr9-7qn64  Successfully assigned kubectl-6642/rs3lwl4k8hvr9-7qn64 to kind-worker
kubectl-6642  1s  Normal  SuccessfulCreate  replicaset/rs3lwl4k8hvr9  Created pod: rs3lwl4k8hvr9-7qn64
kubectl-6642  0s  Warning  FailedCreate  statefulset/ss3lwl4k8hvr9  create Pod ss3lwl4k8hvr9-0 in StatefulSet ss3lwl4k8hvr9 failed error: Pod "ss3lwl4k8hvr9-0" is invalid: spec.containers: Required value
kubectl-75  13s  Normal  Scheduled  pod/pause  Successfully assigned kubectl-75/pause to kind-worker
kubectl-75  11s  Normal  Pulled  pod/pause  Container image "k8s.gcr.io/pause:3.1" already present on machine
kubectl-75  11s  Normal  Created  pod/pause  Created container pause
kubectl-75  11s  Normal  Started  pod/pause  Started container pause
nettest-3642  20s  Normal  Scheduled  pod/netserver-0  Successfully assigned nettest-3642/netserver-0 to kind-worker
nettest-3642  20s  Normal  Pulled  pod/netserver-0  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-3642  20s  Normal  Created  pod/netserver-0  Created container webserver
nettest-3642  19s  Normal  Started  pod/netserver-0  Started container webserver
nettest-3642  20s  Normal  Scheduled  pod/netserver-1  Successfully assigned nettest-3642/netserver-1 to kind-worker2
nettest-3642  20s  Normal  Pulled  pod/netserver-1  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-3642  20s  Normal  Created  pod/netserver-1  Created container webserver
nettest-3642  19s  Normal  Started  pod/netserver-1  Started container webserver
persistent-local-volumes-test-737  11s  Normal  Pulled  pod/hostexec-kind-worker-bdcsn  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-737  11s  Normal  Created  pod/hostexec-kind-worker-bdcsn  Created container agnhost
persistent-local-volumes-test-737  10s  Normal  Started  pod/hostexec-kind-worker-bdcsn  Started container agnhost
persistent-local-volumes-test-737  1s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-ltv6k  no volume plugin matched
prestop-2981  4s  Normal  Scheduled  pod/server  Successfully assigned prestop-2981/server to kind-worker
prestop-2981  3s  Normal  Pulled  pod/server  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
prestop-2981  3s  Normal  Created  pod/server  Created container server
prestop-2981  2s  Normal  Started  pod/server  Started container server
provisioning-4446  12s  Normal  Pulled  pod/pod-subpath-test-hostpath-njdp  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4446  12s  Normal  Created  pod/pod-subpath-test-hostpath-njdp  Created container test-init-subpath-hostpath-njdp
provisioning-4446  12s  Normal  Started  pod/pod-subpath-test-hostpath-njdp  Started container test-init-subpath-hostpath-njdp
provisioning-4446  12s  Normal  Pulled  pod/pod-subpath-test-hostpath-njdp  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4446  11s  Normal  Created  pod/pod-subpath-test-hostpath-njdp  Created container test-container-subpath-hostpath-njdp
provisioning-4446  11s  Normal  Started  pod/pod-subpath-test-hostpath-njdp  Started container test-container-subpath-hostpath-njdp
provisioning-4446  11s  Normal  Pulled  pod/pod-subpath-test-hostpath-njdp  Container image "gcr.io/kubernetes-e2e-test-images/mounttest:1.0" already present on machine
provisioning-4446  11s  Normal  Created  pod/pod-subpath-test-hostpath-njdp  Created container test-container-volume-hostpath-njdp
provisioning-4446  11s  Normal  Started  pod/pod-subpath-test-hostpath-njdp  Started container test-container-volume-hostpath-njdp
provisioning-4787  7s  Normal  Pulled  pod/hostexec-kind-worker-tjwzx  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
provisioning-4787  7s  Normal  Created  pod/hostexec-kind-worker-tjwzx  Created container agnhost
provisioning-4787  6s  Normal  Started  pod/hostexec-kind-worker-tjwzx  Started container agnhost
replicaset-7023  13s  Normal  Scheduled  pod/my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751-jjxds  Successfully assigned replicaset-7023/my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751-jjxds to kind-worker2
replicaset-7023  12s  Normal  Pulled  pod/my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751-jjxds  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
replicaset-7023  12s  Normal  Created  pod/my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751-jjxds  Created container my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751
replicaset-7023  12s  Normal  Started  pod/my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751-jjxds  Started container my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751
replicaset-7023  13s  Normal  SuccessfulCreate  replicaset/my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751  Created pod: my-hostname-basic-b6cc10ba-3723-4625-b50c-4dbf0b733751-jjxds
replication-controller-1172  22s  Normal  Scheduled  pod/my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f-648mg  Successfully assigned replication-controller-1172/my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f-648mg to kind-worker
replication-controller-1172  21s  Normal  Pulled  pod/my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f-648mg  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
replication-controller-1172  21s  Normal  Created  pod/my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f-648mg  Created container my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f
replication-controller-1172  21s  Normal  Started  pod/my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f-648mg  Started container my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f
replication-controller-1172  22s  Normal  SuccessfulCreate  replicationcontroller/my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f  Created pod: my-hostname-basic-75d95e19-fb08-48d9-9e5e-a8a9e21bb21f-648mg
resourcequota-777  4s  Normal  Scheduled  pod/pfpod  Successfully assigned resourcequota-777/pfpod to kind-worker
resourcequota-777  3s  Normal  Pulled  pod/pfpod  Container image "k8s.gcr.io/pause:3.1" already present on machine
resourcequota-777  3s  Normal  Created  pod/pfpod  Created container pause
resourcequota-777  2s  Normal  Started  pod/pfpod  Started container pause
resourcequota-777  0s  Normal  Killing  pod/pfpod  Stopping container pause
sched-preemption-path-3542  13s  Warning  FailedScheduling  pod/rs-pod1-79qbp  0/3 nodes are available: 3 Insufficient example.com/fakecpu.
sched-preemption-path-3542  10s  Normal  Scheduled  pod/rs-pod1-79qbp  Successfully assigned sched-preemption-path-3542/rs-pod1-79qbp to kind-worker2
sched-preemption-path-3542  9s  Normal  Pulled  pod/rs-pod1-79qbp  Container image "k8s.gcr.io/pause:3.1" already present on machine
sched-preemption-path-3542  9s  Normal  Created  pod/rs-pod1-79qbp  Created container pod1
sched-preemption-path-3542  13s  Warning  FailedScheduling  pod/rs-pod1-fjgjk  0/3 nodes are available: 3 Insufficient example.com/fakecpu.
sched-preemption-path-3542  10s  Normal  Scheduled  pod/rs-pod1-fjgjk  Successfully assigned sched-preemption-path-3542/rs-pod1-fjgjk to kind-worker2
sched-preemption-path-3542  9s  Normal  Pulled  pod/rs-pod1-fjgjk  Container image "k8s.gcr.io/pause:3.1" already present on machine
sched-preemption-path-3542  9s  Normal  Created  pod/rs-pod1-fjgjk  Created container pod1
sched-preemption-path-3542  13s  Warning  FailedScheduling  pod/rs-pod1-pw7gf  0/3 nodes are available: 3 Insufficient example.com/fakecpu.
sched-preemption-path-3542  10s  Normal  Scheduled  pod/rs-pod1-pw7gf  Successfully assigned sched-preemption-path-3542/rs-pod1-pw7gf to kind-worker2
sched-preemption-path-3542  9s  Normal  Pulled  pod/rs-pod1-pw7gf  Container image "k8s.gcr.io/pause:3.1" already present on machine
sched-preemption-path-3542  9s  Normal  Created  pod/rs-pod1-pw7gf  Created container pod1
sched-preemption-path-3542  13s  Warning  FailedScheduling  pod/rs-pod1-skpmm  0/3 nodes are available: 3 Insufficient example.com/fakecpu.
sched-preemption-path-3542  10s  Normal  Scheduled  pod/rs-pod1-skpmm  Successfully assigned sched-preemption-path-3542/rs-pod1-skpmm to kind-worker2
sched-preemption-path-3542  9s  Normal  Pulled  pod/rs-pod1-skpmm  Container image "k8s.gcr.io/pause:3.1" already present on machine
sched-preemption-path-3542  9s  Normal  Created  pod/rs-pod1-skpmm  Created container pod1
sched-preemption-path-3542  13s  Warning  FailedScheduling  pod/rs-pod1-vs9mm  0/3 nodes are available: 3 Insufficient example.com/fakecpu.
sched-preemption-path-3542  10s  Normal  Scheduled  pod/rs-pod1-vs9mm  Successfully assigned sched-preemption-path-3542/rs-pod1-vs9mm to kind-worker2
sched-preemption-path-3542  9s  Normal  Pulled  pod/rs-pod1-vs9mm  Container image "k8s.gcr.io/pause:3.1" already present on machine
sched-preemption-path-3542  9s  Normal  Created  pod/rs-pod1-vs9mm  Created container pod1
sched-preemption-path-3542  14s  Normal  SuccessfulCreate  replicaset/rs-pod1  Created pod: rs-pod1-skpmm
sched-preemption-path-3542  14s  Normal  SuccessfulCreate  replicaset/rs-pod1  Created pod: rs-pod1-79qbp
sched-preemption-path-3542  14s  Normal  SuccessfulCreate  replicaset/rs-pod1  Created pod: rs-pod1-vs9mm
sched-preemption-path-3542  14s  Normal  SuccessfulCreate  replicaset/rs-pod1  Created pod: rs-pod1-pw7gf
sched-preemption-path-3542  14s  Normal  SuccessfulCreate  replicaset/rs-pod1  Created pod: rs-pod1-fjgjk
sched-preemption-path-3542  25s  Normal  Scheduled  pod/without-label  Successfully assigned sched-preemption-path-3542/without-label to kind-worker2
sched-preemption-path-3542  23s  Normal  Pulled  pod/without-label  Container image "k8s.gcr.io/pause:3.1" already present on machine
sched-preemption-path-3542  23s  Normal  Created  pod/without-label  Created container without-label
sched-preemption-path-3542  23s  Normal  Started  pod/without-label  Started container without-label
sched-preemption-path-3542  14s  Normal  Killing  pod/without-label  Stopping container without-label
services-3872  8s  Normal  Scheduled  pod/hairpin  Successfully assigned services-3872/hairpin to kind-worker2
statefulset-1303  18s  Normal  ProvisioningSucceeded  persistentvolumeclaim/datadir-ss-0  Successfully provisioned volume pvc-2cd79c50-72b6-40f7-8ccd-31723c5c915d using kubernetes.io/host-path
statefulset-1303  18s  Warning  FailedScheduling  pod/ss-0  error while running "VolumeBinding" filter plugin for pod "ss-0": pod has unbound immediate PersistentVolumeClaims
statefulset-1303  17s  Normal  Scheduled  pod/ss-0  Successfully assigned statefulset-1303/ss-0 to kind-worker
statefulset-1303  15s  Normal  Pulled  pod/ss-0  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-1303  15s  Normal  Created  pod/ss-0  Created container webserver
statefulset-1303  14s  Normal  Started  pod/ss-0  Started container webserver
statefulset-1303  1s  Warning  Unhealthy  pod/ss-0  Readiness probe failed:
statefulset-1303  18s  Normal  SuccessfulCreate  statefulset/ss  create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success
statefulset-1303  18s  Normal  SuccessfulCreate  statefulset/ss  create Pod ss-0 in StatefulSet ss successful
statefulset-2689  9s  Normal  Scheduled  pod/ss2-0  Successfully assigned statefulset-2689/ss2-0 to kind-worker2
statefulset-2689  9s  Normal  SuccessfulCreate  statefulset/ss2  create Pod ss2-0 in StatefulSet ss2 successful
statefulset-4148  20s  Warning  PodFitsHostPorts  pod/ss-0  Predicate PodFitsHostPorts failed
statefulset-4148  0s  Normal  SuccessfulCreate  statefulset/ss  create Pod ss-0 in StatefulSet ss successful
statefulset-4148  0s  Warning  RecreatingFailedPod  statefulset/ss  StatefulSet statefulset-4148/ss is recreating failed Pod ss-0
statefulset-4148  0s  Normal  SuccessfulDelete  statefulset/ss  delete Pod ss-0 in StatefulSet ss successful
statefulset-4148  0s  Warning  FailedCreate  statefulset/ss  create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
statefulset-4148  19s  Normal  Pulled  pod/test-pod  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-4148  19s  Normal  Created  pod/test-pod  Created container webserver
statefulset-4148  19s  Normal  Started  pod/test-pod  Started container webserver
statefulset-8098  4m25s  Normal  Scheduled  pod/ss2-0  Successfully assigned statefulset-8098/ss2-0 to kind-worker2
statefulset-8098  4m24s  Normal  Pulled  pod/ss2-0  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-8098  4m24s  Normal  Created  pod/ss2-0  Created container webserver
statefulset-8098  4m24s  Normal  Started  pod/ss2-0  Started container webserver
statefulset-8098  2m4s  Normal  Killing  pod/ss2-0  Stopping container webserver
statefulset-8098  110s  Normal  Scheduled  pod/ss2-0  Successfully assigned statefulset-8098/ss2-0 to kind-worker2
statefulset-8098  109s  Normal  Pulled  pod/ss2-0  Container image "docker.io/library/httpd:2.4.39-alpine" already present on machine
statefulset-8098  109s  Normal  Created  pod/ss2-0  Created container webserver
statefulset-8098  109s  Normal  Started  pod/ss2-0  Started container webserver
statefulset-8098  4m14s  Normal  Scheduled  pod/ss2-1  Successfully assigned statefulset-8098/ss2-1 to kind-worker
statefulset-8098  4m13s  Normal  Pulled  pod/ss2-1  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-8098  4m13s  Normal  Created  pod/ss2-1  Created container webserver
statefulset-8098  4m12s  Normal  Started  pod/ss2-1  Started container webserver
statefulset-8098  3m24s  Warning  Unhealthy  pod/ss2-1  Readiness probe failed: HTTP probe failed with statuscode: 404
statefulset-8098  2m24s  Normal  Scheduled  pod/ss2-1  Successfully assigned statefulset-8098/ss2-1 to kind-worker2
statefulset-8098  2m23s  Normal  Pulling  pod/ss2-1  Pulling image "docker.io/library/httpd:2.4.39-alpine"
statefulset-8098  2m17s  Normal  Pulled  pod/ss2-1  Successfully pulled image "docker.io/library/httpd:2.4.39-alpine"
statefulset-8098  2m17s  Normal  Created  pod/ss2-1  Created container webserver
statefulset-8098  2m16s  Normal  Started  pod/ss2-1  Started container webserver
statefulset-8098  74s  Warning  Unhealthy  pod/ss2-1  Readiness probe failed: HTTP probe failed with statuscode: 404
statefulset-8098  22s  Normal  Scheduled  pod/ss2-1  Successfully assigned statefulset-8098/ss2-1 to kind-worker2
statefulset-8098  21s  Normal  Pulled  pod/ss2-1  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-8098  21s  Normal  Created  pod/ss2-1  Created container webserver
statefulset-8098  21s  Normal  Started  pod/ss2-1  Started container webserver
statefulset-8098  4m  Normal  Scheduled  pod/ss2-2  Successfully assigned statefulset-8098/ss2-2 to kind-worker2
statefulset-8098  3m59s  Normal  Pulled  pod/ss2-2  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-8098  3m59s  Normal  Created  pod/ss2-2  Created container webserver
statefulset-8098  3m59s  Normal  Started  pod/ss2-2  Started container webserver
statefulset-8098  3m6s  Normal  Killing  pod/ss2-2  Stopping container webserver
statefulset-8098  3m6s  Warning  Unhealthy  pod/ss2-2  Readiness probe failed: Get http://10.244.2.48:80/index.html: dial tcp 10.244.2.48:80: connect: connection refused
statefulset-8098  2m52s  Normal  Scheduled  pod/ss2-2  Successfully assigned statefulset-8098/ss2-2 to kind-worker
statefulset-8098  2m50s  Normal  Pulling  pod/ss2-2  Pulling image "docker.io/library/httpd:2.4.39-alpine"
statefulset-8098  2m41s  Normal  Pulled  pod/ss2-2  Successfully pulled image "docker.io/library/httpd:2.4.39-alpine"
statefulset-8098  2m41s  Normal  Created  pod/ss2-2  Created container webserver
statefulset-8098  2m41s  Normal  Started  pod/ss2-2  Started container webserver
statefulset-8098  51s  Normal  Killing  pod/ss2-2  Stopping container webserver
statefulset-8098  49s  Warning  FailedKillPod  pod/ss2-2  error killing pod: failed to "KillPodSandbox" for "cf2c451d-dc4e-4037-8c81-c876fc569d4d" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"290e702a271bec1d19be7fb320b704c67b95a195f150d30e9c683d58279e7d63\": could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-1afcf3c2c7d30736b8246 --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
statefulset-8098  38s  Normal  Scheduled  pod/ss2-2  Successfully assigned statefulset-8098/ss2-2 to kind-worker
statefulset-8098  36s  Normal  Pulled  pod/ss2-2  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
statefulset-8098  36s  Normal  Created  pod/ss2-2  Created container webserver
statefulset-8098  35s  Normal  Started  pod/ss2-2  Started container webserver
statefulset-8098  110s  Normal  SuccessfulCreate  statefulset/ss2  create Pod ss2-0 in StatefulSet ss2 successful
statefulset-8098  22s  Normal  SuccessfulCreate  statefulset/ss2  create Pod ss2-1 in StatefulSet ss2 successful
statefulset-8098  38s  Normal  SuccessfulCreate  statefulset/ss2  create Pod ss2-2 in StatefulSet ss2 successful
statefulset-8098  51s  Normal  SuccessfulDelete  statefulset/ss2  delete Pod ss2-2 in StatefulSet ss2 successful
statefulset-8098  33s  Normal  SuccessfulDelete  statefulset/ss2  delete Pod ss2-1 in StatefulSet ss2 successful
statefulset-8098  6s  Normal  SuccessfulDelete  statefulset/ss2  delete Pod ss2-0 in StatefulSet ss2 successful
statefulset-8098  4m  Warning  FailedToUpdateEndpoint  endpoints/test  Failed to update endpoint statefulset-8098/test: Operation cannot be fulfilled on endpoints "test": the object has been modified; please apply your changes to the latest version and try again
volume-1413  72s  Normal  Pulled  pod/hostexec-kind-worker-s6t4k  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
volume-1413  72s  Normal  Created  pod/hostexec-kind-worker-s6t4k  Created container agnhost
volume-1413  72s  Normal  Started  pod/hostexec-kind-worker-s6t4k  Started container agnhost
volume-1413  17s  Normal  Pulled  pod/local-client  Container image "docker.io/library/busybox:1.29" already present on machine
volume-1413  17s  Normal  Created  pod/local-client  Created container local-client
volume-1413  16s  Normal  Started  pod/local-client  Started container local-client
volume-1413  3s  Normal  Killing  pod/local-client  Stopping container local-client
volume-1413  54s  Normal  Pulled  pod/local-injector  Container image "docker.io/library/busybox:1.29" already present on machine
volume-1413  54s  Normal  Created  pod/local-injector  Created container local-injector
volume-1413  54s  Normal  Started  pod/local-injector  Started container local-injector
volume-1413  44s  Normal  Killing  pod/local-injector  Stopping container local-injector
volume-1413  61s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-snhrz  storageclass.storage.k8s.io "volume-1413" not found
volumemode-3978  29s  Normal  Pulled  pod/hostexec-kind-worker-trf9m  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
volumemode-3978  29s  Normal  Created  pod/hostexec-kind-worker-trf9m  Created container agnhost
volumemode-3978  29s  Normal  Started  pod/hostexec-kind-worker-trf9m  Started container agnhost
volumemode-3978  16s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-c4vzx  storageclass.storage.k8s.io "volumemode-3978" not found
volumemode-3978  10s  Normal  Scheduled  pod/security-context-fac0d0e3-1388-4379-8ed7-c144706f952e  Successfully assigned volumemode-3978/security-context-fac0d0e3-1388-4379-8ed7-c144706f952e to kind-worker
volumemode-3978  10s  Normal  Pulled  pod/security-context-fac0d0e3-1388-4379-8ed7-c144706f952e  Container image "docker.io/library/busybox:1.29" already present on machine
volumemode-3978  10s  Normal  Created  pod/security-context-fac0d0e3-1388-4379-8ed7-c144706f952e  Created container write-pod
volumemode-3978  9s  Normal  Started  pod/security-context-fac0d0e3-1388-4379-8ed7-c144706f952e  Started container write-pod
volumemode-7769  42s  Normal  Pulled  pod/hostexec-kind-worker-cn5xr  Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
volumemode-7769  42s  Normal  Created  pod/hostexec-kind-worker-cn5xr  Created container agnhost
volumemode-7769  41s  Normal  Started  pod/hostexec-kind-worker-cn5xr  Started container agnhost
volumemode-7769  23s  Warning  ProvisioningFailed  persistentvolumeclaim/pvc-7h75l  storageclass.storage.k8s.io "volumemode-7769" not found
volumemode-7769  9s  Normal  Scheduled  pod/security-context-8a718d0c-90f7-4d00-a39f-70131b82ac14  Successfully assigned volumemode-7769/security-context-8a718d0c-90f7-4d00-a39f-70131b82ac14 to kind-worker
volumemode-7769  8s  Normal  Pulled  pod/security-context-8a718d0c-90f7-4d00-a39f-70131b82ac14  Container image "docker.io/library/busybox:1.29" already present on machine
volumemode-7769  8s  Normal  Created
       pod/security-context-8a718d0c-90f7-4d00-a39f-70131b82ac14                      Created container write-pod\nvolumemode-7769                     8s          Normal    Started                    pod/security-context-8a718d0c-90f7-4d00-a39f-70131b82ac14                      Started container write-pod\n"
Nov 22 03:30:24.069: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config get horizontalpodautoscalers --all-namespaces'
Nov 22 03:30:24.185: INFO: stderr: ""
Nov 22 03:30:24.185: INFO: stdout: "NAMESPACE      NAME             REFERENCE         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE\nkubectl-6642   hpa2lwl4k8hvr9   something/cross   <unknown>/80%   1         3         0          0s\n"
Nov 22 03:30:24.220: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config get jobs --all-namespaces'
Nov 22 03:30:24.342: INFO: stderr: ""
Nov 22 03:30:24.342: INFO: stdout: "NAMESPACE      NAME             COMPLETIONS   DURATION   AGE\nkubectl-6642   job1lwl4k8hvr9   0/1           0s         0s\n"
... skipping 65 lines ...
test/e2e/kubectl/framework.go:23
  kubectl get output
  test/e2e/kubectl/kubectl.go:422
    should contain custom columns for each resource
    test/e2e/kubectl/kubectl.go:423
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client kubectl get output should contain custom columns for each resource","total":-1,"completed":11,"skipped":99,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:29.829: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:30:29.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 63 lines ...
  Only supported for providers [vsphere] (not skeleton)

  test/e2e/storage/vsphere/vsphere_zone_support.go:102
------------------------------
SSSSSSSSSSSS
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":12,"skipped":83,"failed":0}
[BeforeEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:30:10.584: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:20.292 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":13,"skipped":83,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:30.884: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 72 lines ...
• [SLOW TEST:22.375 seconds]
[k8s.io] KubeletManagedEtcHosts
test/e2e/framework/framework.go:629
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":57,"failed":0}
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:34
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:30:31.081: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 8 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:30:31.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-8629" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":12,"skipped":57,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:16.550 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":12,"skipped":106,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 30 lines ...
test/e2e/network/framework.go:23
  should allow pods to hairpin back to themselves through services
  test/e2e/network/service.go:390
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":10,"skipped":93,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:31.873: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 139 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:100
      should store data
      test/e2e/storage/testsuites/volumes.go:150
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:32.986: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 13 lines ...
      test/e2e/storage/testsuites/volumemode.go:333

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      test/e2e/storage/drivers/in_tree.go:258
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":10,"skipped":54,"failed":0}
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:30:20.448: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 63 lines ...
• [SLOW TEST:8.883 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  test/e2e/network/dns.go:86
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":12,"skipped":108,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:36.190: INFO: Driver vsphere doesn't support ext3 -- skipping
... skipping 108 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:504
    should contain last line of the log
    test/e2e/kubectl/kubectl.go:716
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":9,"skipped":68,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 8 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:30:39.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-7356" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass","total":-1,"completed":10,"skipped":76,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:8.394 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:629
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":59,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:8.445 seconds]
[sig-storage] Projected combined
test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":96,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:40.334: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 81 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:203
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:226
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":13,"skipped":94,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:14.727 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  deployment reaping should cascade to its replica sets and pods
  test/e2e/apps/deployment.go:74
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":12,"skipped":120,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 34 lines ...
• [SLOW TEST:25.441 seconds]
[k8s.io] [sig-node] PreStop
test/e2e/framework/framework.go:629
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":11,"skipped":114,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:45.025: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 42 lines ...
  test/e2e/common/runtime.go:38
    when running a container with a new image
    test/e2e/common/runtime.go:263
      should be able to pull image [NodeConformance]
      test/e2e/common/runtime.go:374
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":7,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:247.991 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:629
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
  test/e2e/common/container_probe.go:166
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]","total":-1,"completed":5,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 63 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:100
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:333
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":10,"skipped":103,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 77 lines ...
• [SLOW TEST:20.847 seconds]
[sig-api-machinery] Servers with support for API chunking
test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":14,"skipped":92,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 55 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:100
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:333
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":9,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] HostPath
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:7.043 seconds]
[sig-storage] HostPath
test/e2e/common/host_path.go:34
  should support subPath [NodeConformance]
  test/e2e/common/host_path.go:91
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":12,"skipped":123,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:52.081: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 127 lines ...
• [SLOW TEST:12.712 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":71,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:60.263 seconds]
[sig-api-machinery] Watchers
test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":3,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:52.825: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:30:52.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 73 lines ...
• [SLOW TEST:14.313 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":96,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:30:55.980: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:150
Nov 22 03:30:55.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 25 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  test/e2e/framework/framework.go:150
Nov 22 03:30:56.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":15,"skipped":104,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:18.281 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  evictions: too few pods, absolute => should not allow an eviction
  test/e2e/apps/disruption.go:149
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":11,"skipped":79,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 55 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:504
    should support exec using resource/name
    test/e2e/kubectl/kubectl.go:556
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":13,"skipped":111,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":12,"skipped":124,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:30:05.140: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 92 lines ...
• [SLOW TEST:10.387 seconds]
[sig-storage] Projected secret
test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":73,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:10.244 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_configmap.go:73
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":34,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:18.272 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 47 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  test/e2e/kubectl/kubectl.go:1925
    should create a job from an image, then delete the job  [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":-1,"completed":15,"skipped":95,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:07.548: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 95 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:225
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":71,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":13,"skipped":124,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:31:02.354: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
• [SLOW TEST:10.164 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":124,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:250.128 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:629
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":35,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:12.981: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 98 lines ...
• [SLOW TEST:38.320 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:629
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":117,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4148
STEP: Creating statefulset with conflicting port in namespace statefulset-4148
STEP: Waiting until pod test-pod will start running in namespace statefulset-4148
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4148
Nov 22 03:30:17.802: INFO: Observed stateful pod in namespace: statefulset-4148, name: ss-0, uid: 6191f543-adb5-4320-882c-f41c58d5fcf0, status phase: Pending. Waiting for statefulset controller to delete.
Nov 22 03:30:23.431: INFO: Observed stateful pod in namespace: statefulset-4148, name: ss-0, uid: 6191f543-adb5-4320-882c-f41c58d5fcf0, status phase: Failed. Waiting for statefulset controller to delete.
Nov 22 03:30:23.455: INFO: Observed stateful pod in namespace: statefulset-4148, name: ss-0, uid: 6191f543-adb5-4320-882c-f41c58d5fcf0, status phase: Failed. Waiting for statefulset controller to delete.
Nov 22 03:30:23.486: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4148
STEP: Removing pod with conflicting port in namespace statefulset-4148
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4148 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:90
Nov 22 03:30:56.035: INFO: Deleting all statefulset in ns statefulset-4148
... skipping 11 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:629
    Should recreate evicted statefulset [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":8,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:16.203: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/framework/framework.go:150
Nov 22 03:31:16.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 34 lines ...
      Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

      test/e2e/storage/testsuites/base.go:154
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":11,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:30:34.098: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 67 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:225
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":12,"skipped":54,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:21.174: INFO: Only supported for providers [aws] (not skeleton)
... skipping 149 lines ...
• [SLOW TEST:16.171 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":66,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:23.016: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 47 lines ...
• [SLOW TEST:16.200 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it
  test/e2e/apps/disruption.go:200
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it","total":-1,"completed":16,"skipped":106,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:12.161 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/configmap_volume.go:111
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":69,"failed":0}
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:31:25.246: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:634
STEP: Creating projection with secret that has name secret-emptykey-test-0f016a9d-8b81-465a-83fe-87d51a1865ea
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:150
Nov 22 03:31:25.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8831" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":7,"skipped":69,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 66 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:203
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":16,"skipped":105,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 93 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":13,"skipped":136,"failed":0}

SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:25:57.961: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 99 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:629
    should perform rolling updates and roll backs of template modifications [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:29.630: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:31:29.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 38 lines ...
• [SLOW TEST:17.143 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":15,"skipped":125,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Volume Placement
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 138 lines ...
• [SLOW TEST:30.608 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":14,"skipped":118,"failed":0}

SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:30.130: INFO: Only supported for providers [azure] (not skeleton)
... skipping 50 lines ...
• [SLOW TEST:8.114 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":114,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 37 lines ...
Nov 22 03:30:52.525: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-9f7dl] to have phase Bound
Nov 22 03:30:52.536: INFO: PersistentVolumeClaim pvc-9f7dl found but phase is Pending instead of Bound.
Nov 22 03:30:54.541: INFO: PersistentVolumeClaim pvc-9f7dl found but phase is Pending instead of Bound.
Nov 22 03:30:56.549: INFO: PersistentVolumeClaim pvc-9f7dl found but phase is Pending instead of Bound.
Nov 22 03:30:58.554: INFO: PersistentVolumeClaim pvc-9f7dl found and phase=Bound (6.028371822s)
STEP: checking for CSIInlineVolumes feature
Nov 22 03:31:10.598: INFO: Error getting logs for pod csi-inline-volume-5zb2s: the server rejected our request for an unknown reason (get pods csi-inline-volume-5zb2s)
STEP: Deleting pod csi-inline-volume-5zb2s in namespace csi-mock-volumes-9663
STEP: Deleting the previously created pod
Nov 22 03:31:20.637: INFO: Deleting pod "pvc-volume-tester-bctq4" in namespace "csi-mock-volumes-9663"
Nov 22 03:31:20.643: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bctq4" to be fully deleted
WARNING: pod log: pvc-volume-tester-bctq4/volume-tester: pods "pvc-volume-tester-bctq4" not found
STEP: Checking CSI driver logs
Nov 22 03:31:30.664: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9663","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9663","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-9663","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9663","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-ec741053-e6c6-4b71-9317-a08888a9f587","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-ec741053-e6c6-4b71-9317-a08888a9f587"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-9663","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-ec741053-e6c6-4b71-9317-a08888a9f587","storage.kubernetes.io/csiProvisionerIdentity":"1574393456470-8081-csi-mock-csi-mock-volumes-9663"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ec741053-e6c6-4b71-9317-a08888a9f587/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-ec741053-e6c6-4b71-9317-a08888a9f587","storage.kubernetes.io/csiProvisionerIdentity":"1574393456470-8081-csi-mock-csi-mock-volumes-9663"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ec741053-e6c6-4b71-9317-a08888a9f587/globalmount","target_path":"/var/lib/kubelet/pods/2d0b9517-8cd8-44ed-b7f9-f24c9907a8f5/volumes/kubernetes.io~csi/pvc-ec741053-e6c6-4b71-9317-a08888a9f587/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"pvc-volume-tester-bctq4","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-9663","csi.storage.k8s.io/pod.uid":"2d0b9517-8cd8-44ed-b7f9-f24c9907a8f5","csi.storage.k8s.io/serviceAccount.name":"default","name":"pvc-ec741053-e6c6-4b71-9317-a08888a9f587","storage.kubernetes.io/csiProvisionerIdentity":"1574393456470-8081-csi-mock-csi-mock-volumes-9663"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/2d0b9517-8cd8-44ed-b7f9-f24c9907a8f5/volumes/kubernetes.io~csi/pvc-ec741053-e6c6-4b71-9317-a08888a9f587/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2d0b9517-8cd8-44ed-b7f9-f24c9907a8f5/volumes/kubernetes.io~csi/pvc-ec741053-e6c6-4b71-9317-a08888a9f587/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ec741053-e6c6-4b71-9317-a08888a9f587/globalmount"},"Response":{},"Error":""}

Nov 22 03:31:30.664: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Nov 22 03:31:30.664: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-bctq4
Nov 22 03:31:30.664: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-9663
Nov 22 03:31:30.664: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 2d0b9517-8cd8-44ed-b7f9-f24c9907a8f5
Nov 22 03:31:30.664: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
... skipping 43 lines ...
test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  test/e2e/storage/csi_mock_volume.go:296
    should be passed when podInfoOnMount=true
    test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":10,"skipped":37,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:12.278 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":78,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:33.562: INFO: Only supported for providers [gce gke] (not skeleton)
... skipping 32 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:31:33.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-5336" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":11,"skipped":43,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral
... skipping 95 lines ...
  test/e2e/storage/csi_volumes.go:55
    [Testpattern: inline ephemeral CSI volume] ephemeral
    test/e2e/storage/testsuites/base.go:100
      should create read-only inline ephemeral volume
      test/e2e/storage/testsuites/ephemeral.go:115
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":12,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:18.286 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":9,"skipped":62,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 24 lines ...
test/e2e/common/networking.go:26
  Granular Checks: Pods
  test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":102,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:35.058: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 71 lines ...
test/e2e/framework/framework.go:629
  When creating a container with runAsUser
  test/e2e/common/security_context.go:43
    should run the container with uid 0 [LinuxOnly] [NodeConformance]
    test/e2e/common/security_context.go:92
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":75,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:35.431: INFO: Only supported for providers [aws] (not skeleton)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/framework/framework.go:150
Nov 22 03:31:35.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 34 lines ...
• [SLOW TEST:12.087 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:629
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":109,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:39.856: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:31:39.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 46 lines ...
• [SLOW TEST:10.256 seconds]
[sig-node] Downward API
test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":142,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 63 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:374
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":12,"skipped":82,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:40.149: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
... skipping 85 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:203
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":14,"skipped":122,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:41.732: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 69 lines ...
• [SLOW TEST:104.554 seconds]
[sig-scheduling] PreemptionExecutionPath
test/e2e/scheduling/framework.go:40
  runs ReplicaSets to verify preemption running path
  test/e2e/scheduling/preemption.go:307
------------------------------
{"msg":"PASSED [sig-scheduling] PreemptionExecutionPath runs ReplicaSets to verify preemption running path","total":-1,"completed":18,"skipped":170,"failed":0}

SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 14 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:31:43.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3276" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":19,"skipped":187,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:43.535: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 126 lines ...
test/e2e/kubectl/framework.go:23
  Update Demo
  test/e2e/kubectl/kubectl.go:328
    should do a rolling update of a replication controller  [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":-1,"completed":11,"skipped":106,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 67 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:225
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":16,"skipped":74,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:45.543: INFO: Driver gluster doesn't support ext4 -- skipping
... skipping 60 lines ...
STEP: creating execpod-noendpoints on node kind-worker
Nov 22 03:31:30.195: INFO: Creating new exec pod
Nov 22 03:31:46.218: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node kind-worker
Nov 22 03:31:46.218: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-1226 execpod-noendpoints7ckcf -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Nov 22 03:31:47.690: INFO: rc: 1
Nov 22 03:31:47.690: INFO: error contained 'REFUSED', as expected: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-1226 execpod-noendpoints7ckcf -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect --timeout=3s no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:150
Nov 22 03:31:47.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1226" for this suite.
[AfterEach] [sig-network] Services
... skipping 3 lines ...
• [SLOW TEST:17.569 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be rejected when no endpoints exist
  test/e2e/network/service.go:2009
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":15,"skipped":138,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:47.715: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 123 lines ...
• [SLOW TEST:14.303 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_downwardapi.go:105
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":13,"skipped":75,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:48.707: INFO: Only supported for providers [azure] (not skeleton)
... skipping 96 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:504
    should support inline execution and attach
    test/e2e/kubectl/kubectl.go:667
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":5,"skipped":41,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 117 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    test/e2e/storage/testsuites/base.go:100
      should store data
      test/e2e/storage/testsuites/volumes.go:150
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":6,"skipped":9,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:16.270 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":113,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:51.353: INFO: Driver gluster doesn't support ext4 -- skipping
... skipping 130 lines ...
• [SLOW TEST:14.244 seconds]
[sig-storage] Secrets
test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":86,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:54.403: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 44 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:31:54.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7279" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":-1,"completed":14,"skipped":97,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:54.712: INFO: Driver local doesn't support ext3 -- skipping
... skipping 102 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:31:54.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-5469" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":15,"skipped":113,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:54.837: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:150
Nov 22 03:31:54.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 124 lines ...
• [SLOW TEST:22.331 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":90,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 18 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:31:55.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6142" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":15,"skipped":91,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:55.984: INFO: Driver local doesn't support ext4 -- skipping
... skipping 130 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:31:56.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1991" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":-1,"completed":16,"skipped":103,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:31:56.577: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 39 lines ...
test/e2e/framework/framework.go:629
  when scheduling a busybox Pod with hostAliases
  test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":94,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:02.987: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:32:02.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 33 lines ...
Nov 22 03:31:57.216: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c16400f9-85c6-4440-a7cc-695754424d5a" in namespace "pods-1725" to be "terminated due to deadline exceeded"
Nov 22 03:31:57.220: INFO: Pod "pod-update-activedeadlineseconds-c16400f9-85c6-4440-a7cc-695754424d5a": Phase="Running", Reason="", readiness=true. Elapsed: 4.018996ms
Nov 22 03:31:59.226: INFO: Pod "pod-update-activedeadlineseconds-c16400f9-85c6-4440-a7cc-695754424d5a": Phase="Running", Reason="", readiness=true. Elapsed: 2.010229779s
Nov 22 03:32:01.231: INFO: Pod "pod-update-activedeadlineseconds-c16400f9-85c6-4440-a7cc-695754424d5a": Phase="Running", Reason="", readiness=true. Elapsed: 4.01521594s
Nov 22 03:32:03.235: INFO: Pod "pod-update-activedeadlineseconds-c16400f9-85c6-4440-a7cc-695754424d5a": Phase="Running", Reason="", readiness=true. Elapsed: 6.018933095s
Nov 22 03:32:05.238: INFO: Pod "pod-update-activedeadlineseconds-c16400f9-85c6-4440-a7cc-695754424d5a": Phase="Running", Reason="", readiness=true. Elapsed: 8.022106694s
Nov 22 03:32:07.243: INFO: Pod "pod-update-activedeadlineseconds-c16400f9-85c6-4440-a7cc-695754424d5a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 10.027001542s
Nov 22 03:32:07.243: INFO: Pod "pod-update-activedeadlineseconds-c16400f9-85c6-4440-a7cc-695754424d5a" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:150
Nov 22 03:32:07.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1725" for this suite.


• [SLOW TEST:32.748 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:629
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":66,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:07.264: INFO: Driver azure doesn't support ext3 -- skipping
... skipping 197 lines ...
• [SLOW TEST:13.146 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":17,"skipped":112,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 49 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    test/e2e/storage/testsuites/base.go:100
      should store data
      test/e2e/storage/testsuites/volumes.go:150
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":6,"skipped":55,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:14.078: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 29 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:32:15.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6403" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":7,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:15.186: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 58 lines ...
• [SLOW TEST:26.782 seconds]
[sig-storage] Projected secret
test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:16.657: INFO: Driver cinder doesn't support ext4 -- skipping
... skipping 93 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:359
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":12,"skipped":116,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:6.202 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_configmap.go:108
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 83 lines ...
test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  test/e2e/storage/csi_mock_volume.go:419
    should not expand volume if resizingOnDriver=off, resizingOnSC=on
    test/e2e/storage/csi_mock_volume.go:448
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":17,"skipped":103,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 30 lines ...
  test/e2e/common/runtime.go:38
    when starting a container that exits
    test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":52,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 66 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:242
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:243
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":17,"skipped":145,"failed":0}
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:32:25.798: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename zone-support
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 107 lines ...
Nov 22 03:32:14.129: INFO: Deleting pod "pvc-volume-tester-x72hc" in namespace "csi-mock-volumes-6226"
Nov 22 03:32:14.141: INFO: Wait up to 5m0s for pod "pvc-volume-tester-x72hc" to be fully deleted
WARNING: pod log: pvc-volume-tester-x72hc/volume-tester: pods "pvc-volume-tester-x72hc" not found
STEP: Checking CSI driver logs
Nov 22 03:32:26.178: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6226","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6226","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-6226","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-6226","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"csi-455b2cc9a4fae26092115f53935be9a854fc950e6647d64f2d5abb1cdc1b4e6a","target_path":"/var/lib/kubelet/pods/9828ce37-d96b-46cf-8354-89d25b1333df/volumes/kubernetes.io~csi/my-volume/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"true","csi.storage.k8s.io/pod.name":"pvc-volume-tester-x72hc","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-6226","csi.storage.k8s.io/pod.uid":"9828ce37-d96b-46cf-8354-89d25b1333df","csi.storage.k8s.io/serviceAccount.name":"default"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-455b2cc9a4fae26092115f53935be9a854fc950e6647d64f2d5abb1cdc1b4e6a","target_path":"/var/lib/kubelet/pods/9828ce37-d96b-46cf-8354-89d25b1333df/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":""}

Nov 22 03:32:26.178: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Nov 22 03:32:26.179: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-x72hc
Nov 22 03:32:26.179: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-6226
Nov 22 03:32:26.179: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 9828ce37-d96b-46cf-8354-89d25b1333df
Nov 22 03:32:26.179: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
... skipping 38 lines ...
test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  test/e2e/storage/csi_mock_volume.go:296
    contain ephemeral=true when using inline volume
    test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":4,"skipped":18,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:26.811: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 15 lines ...
      Driver local doesn't support InlineVolume -- skipping

      test/e2e/storage/testsuites/base.go:154
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: http","total":-1,"completed":10,"skipped":62,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:32:08.196: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 104 lines ...
test/e2e/framework/framework.go:629
  when creating containers with AllowPrivilegeEscalation
  test/e2e/common/security_context.go:289
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    test/e2e/common/security_context.go:328
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":13,"skipped":117,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:31.130: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 73 lines ...
• [SLOW TEST:10.170 seconds]
[k8s.io] Docker Containers
test/e2e/framework/framework.go:629
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":65,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:31.568: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:32:31.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 88 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:161
    should function for endpoint-Service: http
    test/e2e/network/networking.go:199
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http","total":-1,"completed":15,"skipped":133,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:32.165: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:32:32.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 92 lines ...
• [SLOW TEST:12.140 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  evictions: no PDB => should allow an eviction
  test/e2e/apps/disruption.go:149
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":18,"skipped":108,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 66 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":14,"skipped":145,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:36.831: INFO: Driver gluster doesn't support ntfs -- skipping
... skipping 84 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:32:38.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8315" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":19,"skipped":113,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:38.274: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 134 lines ...
test/e2e/framework/framework.go:629
  [k8s.io] [sig-node] Clean up pods on node
  test/e2e/framework/framework.go:629
    kubelet should be able to delete 10 pods per node in 1m0s.
    test/e2e/node/kubelet.go:338
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":17,"skipped":88,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 24 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 22 03:32:06.006: INFO: File wheezy_udp@dns-test-service-3.dns-1547.svc.cluster.local from pod  dns-1547/dns-test-1618797e-4efa-4ef9-8290-2d00efcecc11 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 22 03:32:06.011: INFO: File jessie_udp@dns-test-service-3.dns-1547.svc.cluster.local from pod  dns-1547/dns-test-1618797e-4efa-4ef9-8290-2d00efcecc11 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 22 03:32:06.011: INFO: Lookups using dns-1547/dns-test-1618797e-4efa-4ef9-8290-2d00efcecc11 failed for: [wheezy_udp@dns-test-service-3.dns-1547.svc.cluster.local jessie_udp@dns-test-service-3.dns-1547.svc.cluster.local]

Nov 22 03:32:11.016: INFO: File wheezy_udp@dns-test-service-3.dns-1547.svc.cluster.local from pod  dns-1547/dns-test-1618797e-4efa-4ef9-8290-2d00efcecc11 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 22 03:32:11.020: INFO: File jessie_udp@dns-test-service-3.dns-1547.svc.cluster.local from pod  dns-1547/dns-test-1618797e-4efa-4ef9-8290-2d00efcecc11 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 22 03:32:11.020: INFO: Lookups using dns-1547/dns-test-1618797e-4efa-4ef9-8290-2d00efcecc11 failed for: [wheezy_udp@dns-test-service-3.dns-1547.svc.cluster.local jessie_udp@dns-test-service-3.dns-1547.svc.cluster.local]

Nov 22 03:32:16.021: INFO: DNS probes using dns-test-1618797e-4efa-4ef9-8290-2d00efcecc11 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1547.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1547.svc.cluster.local; sleep 1; done
... skipping 17 lines ...
• [SLOW TEST:60.275 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":18,"skipped":120,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 50 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:100
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:333
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":14,"skipped":119,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:42.477: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 57 lines ...
• [SLOW TEST:17.076 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":18,"skipped":148,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 41 lines ...
test/e2e/framework/framework.go:629
  when create a pod with lifecycle hook
  test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":108,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:43.183: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 60 lines ...
Nov 22 03:32:43.252: INFO: AfterEach: Cleaning up test resources


S [SKIPPING] in Spec Setup (BeforeEach) [0.054 seconds]
[sig-storage] PersistentVolumes:vsphere
test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on vsphere volume detach [BeforeEach]
  test/e2e/storage/vsphere/persistent_volumes-vsphere.go:147

  Only supported for providers [vsphere] (not skeleton)

  test/e2e/storage/vsphere/persistent_volumes-vsphere.go:63
------------------------------
... skipping 65 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:374
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":11,"skipped":79,"failed":0}

SSSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":20,"skipped":192,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:31:52.740: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 45 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:437
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":21,"skipped":192,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:16.293 seconds]
[sig-storage] Secrets
test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:47.890: INFO: Only supported for providers [openstack] (not skeleton)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/framework/framework.go:150
Nov 22 03:32:47.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 138 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:100
      should store data
      test/e2e/storage/testsuites/volumes.go:150
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":9,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:20.618 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":68,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 53 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:504
    should support exec through kubectl proxy
    test/e2e/kubectl/kubectl.go:598
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":14,"skipped":129,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 202 lines ...
• [SLOW TEST:10.174 seconds]
[sig-node] Downward API
test/e2e/common/downward_api.go:33
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  test/e2e/common/downward_api.go:108
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":19,"skipped":149,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 59 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:504
    should support port-forward
    test/e2e/kubectl/kubectl.go:731
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":13,"skipped":54,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 15 lines ...
STEP: Creating the service on top of the pods in kubernetes
Nov 22 03:32:28.255: INFO: Service node-port-service in namespace nettest-9675 found.
Nov 22 03:32:28.301: INFO: Service session-affinity-service in namespace nettest-9675 found.
STEP: dialing(udp) test-container-pod --> 10.96.246.144:90
Nov 22 03:32:28.310: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.248:8080/dial?request=hostName&protocol=udp&host=10.96.246.144&port=90&tries=1'] Namespace:nettest-9675 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:32:28.310: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:32:33.447: INFO: Tries: 10, in try: 0, stdout: {"errors":["reading from udp connection failed. err:'read udp 10.244.1.248:44254-\u003e10.96.246.144:90: i/o timeout'"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-9675", SelfLink:"/api/v1/namespaces/nettest-9675/pods/host-test-container-pod", UID:"9fc8fc0c-ccec-4772-bcc1-af5555067ef4", ResourceVersion:"20611", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63709990332, loc:(*time.Location)(0x7ce3280)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-j5dws", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000933980), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.8", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j5dws", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00194da28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker2", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002090840), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00194da70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00194da90)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00194da98), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00194da9c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990332, loc:(*time.Location)(0x7ce3280)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990333, loc:(*time.Location)(0x7ce3280)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990333, loc:(*time.Location)(0x7ce3280)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990332, loc:(*time.Location)(0x7ce3280)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"172.17.0.3", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.17.0.3"}}, StartTime:(*v1.Time)(0xc001fdc360), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001fdc380), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.8", ImageID:"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5", ContainerID:"containerd://d6cf048b28163070ec507f4dd30cc3047920c7d226a908644c2f55841d3ae35b", Started:(*bool)(0xc00194db17)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
Nov 22 03:32:35.451: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.248:8080/dial?request=hostName&protocol=udp&host=10.96.246.144&port=90&tries=1'] Namespace:nettest-9675 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:32:35.451: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:32:35.584: INFO: Tries: 10, in try: 1, stdout: {"responses":["netserver-0"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-9675", SelfLink:"/api/v1/namespaces/nettest-9675/pods/host-test-container-pod", UID:"9fc8fc0c-ccec-4772-bcc1-af5555067ef4", ResourceVersion:"20611", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63709990332, loc:(*time.Location)(0x7ce3280)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-j5dws", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000933980), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.8", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j5dws", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00194da28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker2", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002090840), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00194da70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00194da90)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00194da98), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00194da9c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990332, loc:(*time.Location)(0x7ce3280)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990333, loc:(*time.Location)(0x7ce3280)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990333, loc:(*time.Location)(0x7ce3280)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990332, loc:(*time.Location)(0x7ce3280)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"172.17.0.3", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.17.0.3"}}, StartTime:(*v1.Time)(0xc001fdc360), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001fdc380), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.8", ImageID:"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5", ContainerID:"containerd://d6cf048b28163070ec507f4dd30cc3047920c7d226a908644c2f55841d3ae35b", Started:(*bool)(0xc00194db17)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
Nov 22 03:32:37.593: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.248:8080/dial?request=hostName&protocol=udp&host=10.96.246.144&port=90&tries=1'] Namespace:nettest-9675 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 03:32:37.593: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Nov 22 03:32:37.806: INFO: Tries: 10, in try: 2, stdout: {"responses":["netserver-0"]}, stderr: , command run in: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"host-test-container-pod", GenerateName:"", Namespace:"nettest-9675", SelfLink:"/api/v1/namespaces/nettest-9675/pods/host-test-container-pod", UID:"9fc8fc0c-ccec-4772-bcc1-af5555067ef4", ResourceVersion:"20611", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63709990332, loc:(*time.Location)(0x7ce3280)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-j5dws", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000933980), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"agnhost", Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.8", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j5dws", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00194da28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker2", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002090840), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00194da70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00194da90)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00194da98), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00194da9c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990332, loc:(*time.Location)(0x7ce3280)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990333, loc:(*time.Location)(0x7ce3280)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990333, loc:(*time.Location)(0x7ce3280)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63709990332, loc:(*time.Location)(0x7ce3280)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"172.17.0.3", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.17.0.3"}}, StartTime:(*v1.Time)(0xc001fdc360), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"agnhost", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001fdc380), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"gcr.io/kubernetes-e2e-test-images/agnhost:2.8", ImageID:"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5", ContainerID:"containerd://d6cf048b28163070ec507f4dd30cc3047920c7d226a908644c2f55841d3ae35b", Started:(*bool)(0xc00194db17)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
... skipping 29 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:161
    should function for client IP based session affinity: udp [LinuxOnly]
    test/e2e/network/networking.go:282
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]","total":-1,"completed":18,"skipped":128,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PV Protection
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 37 lines ...
• [SLOW TEST:5.197 seconds]
[sig-storage] PV Protection
test/e2e/storage/utils/framework.go:23
  Verify that PV bound to a PVC is not removed immediately
  test/e2e/storage/pv_protection.go:106
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":15,"skipped":130,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:55.798: INFO: Driver gluster doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:32:55.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 39 lines ...
• [SLOW TEST:16.139 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":19,"skipped":124,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:56.300: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 31 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:32:56.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7147" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":19,"skipped":130,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:32:56.662: INFO: Driver hostPathSymlink doesn't support ntfs -- skipping
... skipping 161 lines ...
      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      test/e2e/storage/testsuites/base.go:154
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":16,"skipped":140,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:32:52.055: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
• [SLOW TEST:6.229 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:629
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  test/e2e/node/security_context.go:102
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":17,"skipped":140,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 37 lines ...
• [SLOW TEST:7.187 seconds]
[sig-scheduling] LimitRange
test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied.
  test/e2e/scheduling/limit_range.go:55
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":13,"skipped":121,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:32:52.139: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 37 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:374
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":14,"skipped":121,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 61 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:629
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":13,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:04.528: INFO: Only supported for providers [gce gke] (not skeleton)
... skipping 59 lines ...
• [SLOW TEST:49.241 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":8,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:05.907: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 66 lines ...
• [SLOW TEST:10.631 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should delete jobs and pods created by cronjob
  test/e2e/apimachinery/garbage_collector.go:1077
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":20,"skipped":129,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:06.951: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 48 lines ...
test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":18,"skipped":124,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:11.010: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 55 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:161
    should be able to handle large requests: udp
    test/e2e/network/networking.go:306
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp","total":-1,"completed":20,"skipped":129,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:13.079: INFO: Only supported for providers [aws] (not skeleton)
... skipping 151 lines ...
• [SLOW TEST:86.023 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly]
  test/e2e/network/service.go:1828
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly]","total":-1,"completed":16,"skipped":158,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 23 lines ...
test/e2e/framework/framework.go:629
  When creating a container with runAsUser
  test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":68,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 250 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Nov 22 03:33:15.402: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config exec --namespace=kubectl-6094 httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Nov 22 03:33:16.309: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Nov 22 03:33:16.310: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config exec --namespace=kubectl-6094 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Nov 22 03:33:16.832: INFO: rc: 255
Nov 22 03:33:16.832: INFO: got err error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config exec --namespace=kubectl-6094 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I1122 03:33:16.731651     209 merged_client_builder.go:164] Using in-cluster namespace
I1122 03:33:16.731959     209 merged_client_builder.go:122] Using in-cluster configuration
I1122 03:33:16.752393     209 merged_client_builder.go:122] Using in-cluster configuration
I1122 03:33:16.768214     209 merged_client_builder.go:122] Using in-cluster configuration
I1122 03:33:16.768578     209 round_trippers.go:420] GET https://10.96.0.1:443/api/v1/namespaces/kubectl-6094/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F1122 03:33:16.779911     209 helpers.go:114] error: You must be logged in to the server (Unauthorized)

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
Nov 22 03:33:16.832: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config exec --namespace=kubectl-6094 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Nov 22 03:33:17.462: INFO: rc: 255
Nov 22 03:33:17.462: INFO: got err error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config exec --namespace=kubectl-6094 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I1122 03:33:17.361773     225 merged_client_builder.go:164] Using in-cluster namespace
I1122 03:33:17.384581     225 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 22 milliseconds
I1122 03:33:17.384659     225 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.96.0.10:53: no such host
I1122 03:33:17.393177     225 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 8 milliseconds
I1122 03:33:17.393243     225 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.96.0.10:53: no such host
I1122 03:33:17.393273     225 shortcut.go:89] Error loading discovery information: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.96.0.10:53: no such host
I1122 03:33:17.405705     225 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 12 milliseconds
I1122 03:33:17.405812     225 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.96.0.10:53: no such host
I1122 03:33:17.419832     225 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 13 milliseconds
I1122 03:33:17.419912     225 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.96.0.10:53: no such host
I1122 03:33:17.425740     225 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 5 milliseconds
I1122 03:33:17.425824     225 cached_discovery.go:121] skipped caching discovery info due to Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.96.0.10:53: no such host
I1122 03:33:17.425858     225 helpers.go:221] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.96.0.10:53: no such host
F1122 03:33:17.425889     225 helpers.go:114] Unable to connect to the server: dial tcp: lookup invalid on 10.96.0.10:53: no such host

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Nov 22 03:33:17.462: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config exec --namespace=kubectl-6094 httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Nov 22 03:33:17.923: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Nov 22 03:33:17.923: INFO: stdout: "I1122 03:33:17.841015     238 merged_client_builder.go:122] Using in-cluster configuration\nI1122 03:33:17.845628     238 merged_client_builder.go:122] Using in-cluster configuration\nI1122 03:33:17.855537     238 merged_client_builder.go:122] Using in-cluster configuration\nI1122 03:33:17.868686     238 round_trippers.go:443] GET https://10.96.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 12 milliseconds\nNo resources found in invalid namespace.\n"
Nov 22 03:33:17.924: INFO: stdout: I1122 03:33:17.841015     238 merged_client_builder.go:122] Using in-cluster configuration
... skipping 69 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:504
    should handle in-cluster config
    test/e2e/kubectl/kubectl.go:748
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":18,"skipped":148,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:19.131: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:150
Nov 22 03:33:19.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 179 lines ...
Nov 22 03:33:19.774: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Nov 22 03:33:19.774: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config describe pod agnhost-master-9ct27 --namespace=kubectl-5946'
Nov 22 03:33:20.096: INFO: stderr: ""
Nov 22 03:33:20.096: INFO: stdout: "Name:         agnhost-master-9ct27\nNamespace:    kubectl-5946\nPriority:     0\nNode:         kind-worker/172.17.0.2\nStart Time:   Fri, 22 Nov 2019 03:33:13 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.1.28\nIPs:\n  IP:           10.244.1.28\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://21aa8b30abe2d2ca2c3a6de661321d563c9a88c9a4a6d9bb6901c02fdb356fed\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 22 Nov 2019 03:33:14 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nqdwv (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-nqdwv:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-nqdwv\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                  Message\n  ----    ------     ----  ----                  -------\n  Normal  Scheduled  7s    default-scheduler     Successfully assigned kubectl-5946/agnhost-master-9ct27 to kind-worker\n  Normal  Pulled     6s    kubelet, kind-worker  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    6s    kubelet, kind-worker  Created container agnhost-master\n  Normal  Started    6s    kubelet, kind-worker  Started container agnhost-master\n"
Nov 22 03:33:20.096: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config describe rc agnhost-master --namespace=kubectl-5946'
Nov 22 03:33:20.275: INFO: stderr: ""
Nov 22 03:33:20.275: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-5946\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: agnhost-master-9ct27\n"
Nov 22 03:33:20.276: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config describe service agnhost-master --namespace=kubectl-5946'
Nov 22 03:33:20.459: INFO: stderr: ""
Nov 22 03:33:20.459: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-5946\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.15.125\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.28:6379\nSession Affinity:  None\nEvents:            <none>\n"
Nov 22 03:33:20.478: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config describe node kind-control-plane'
Nov 22 03:33:20.689: INFO: stderr: ""
Nov 22 03:33:20.689: INFO: stdout: "Name:               kind-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kind-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 22 Nov 2019 03:23:53 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kind-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Fri, 22 Nov 2019 03:33:18 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Fri, 22 Nov 2019 03:29:56 +0000   Fri, 22 Nov 2019 03:23:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Fri, 22 Nov 2019 03:29:56 +0000   Fri, 22 Nov 2019 03:23:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Fri, 22 Nov 2019 03:29:56 +0000   Fri, 22 Nov 2019 03:23:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Fri, 22 Nov 2019 03:29:56 +0000   Fri, 22 Nov 2019 03:24:56 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.4\n  Hostname:    kind-control-plane\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253696108Ki\n  hugepages-2Mi:      0\n  memory:             53588700Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253696108Ki\n  hugepages-2Mi:      0\n  memory:             53588700Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 a8a2ce0a225f40d584de3d30e4905dc8\n  System UUID:                ce38eddb-12ef-43ba-ab85-f38b37833c59\n  Boot ID:                    701660ff-2104-47a2-b0c0-023c3fc55e82\n  Kernel Version:             4.14.138+\n  OS Image:                   Ubuntu Eoan Ermine (development branch)\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.0-27-g54658b88\n  Kubelet Version:            v1.18.0-alpha.0.1116+94ec940998d730\n  Kube-Proxy Version:         v1.18.0-alpha.0.1116+94ec940998d730\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (8 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-6955765f44-mxkvk                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m8s\n  kube-system                 coredns-6955765f44-v49tc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m8s\n  kube-system                 etcd-kind-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s\n  kube-system                 kindnet-lnv5z                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m8s\n  kube-system                 kube-apiserver-kind-control-plane             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m23s\n  kube-system                 kube-controller-manager-kind-control-plane    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m23s\n  kube-system                 kube-proxy-vjhtv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s\n  kube-system                 kube-scheduler-kind-control-plane             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m23s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (10%)  100m (1%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:\n  Type     Reason                    Age    From                            Message\n  ----     ------                    ----   ----                            -------\n  Normal   Starting                  9m24s  kubelet, kind-control-plane     Starting kubelet.\n  Warning  CheckLimitsForResolvConf  9m24s  kubelet, kind-control-plane     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   NodeHasSufficientMemory   9m24s  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     9m24s  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      9m24s  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced   9m24s  kubelet, kind-control-plane     Updated Node Allocatable limit across pods\n  Normal   Starting                  9m1s   kube-proxy, kind-control-plane  Starting kube-proxy.\n  Normal   NodeReady                 8m24s  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeReady\n"
... skipping 11 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl describe
  test/e2e/kubectl/kubectl.go:1135
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":21,"skipped":134,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:20.898: INFO: Only supported for providers [gce gke] (not skeleton)
... skipping 68 lines ...
    Only supported for node OS distro [gci ubuntu custom] (not debian)

    test/e2e/common/volumes.go:65
------------------------------
SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.","total":-1,"completed":14,"skipped":57,"failed":0}
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:33:01.037: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 34 lines ...
• [SLOW TEST:20.807 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":15,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:21.845: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:150
Nov 22 03:33:21.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 69 lines ...
• [SLOW TEST:8.213 seconds]
[k8s.io] Variable Expansion
test/e2e/framework/framework.go:629
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":71,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Cadvisor
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 10 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:33:24.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cadvisor-7243" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Cadvisor should be healthy on every node.","total":-1,"completed":16,"skipped":76,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:24.162: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:150
Nov 22 03:33:24.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 72 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:634
STEP: creating service endpoint-test2 in namespace services-1807
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1807 to expose endpoints map[]
Nov 22 03:32:55.974: INFO: Get endpoints failed (12.973127ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Nov 22 03:32:57.005: INFO: successfully validated that service endpoint-test2 in namespace services-1807 exposes endpoints map[] (1.044441247s elapsed)
STEP: Creating pod pod1 in namespace services-1807
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1807 to expose endpoints map[pod1:[80]]
Nov 22 03:33:01.067: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.051594809s elapsed, will retry)
Nov 22 03:33:06.115: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.099956436s elapsed, will retry)
Nov 22 03:33:10.185: INFO: successfully validated that service endpoint-test2 in namespace services-1807 exposes endpoints map[pod1:[80]] (13.170335892s elapsed)
... skipping 19 lines ...
• [SLOW TEST:30.019 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":16,"skipped":147,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:20.056 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":9,"skipped":27,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 12 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:33:26.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2263" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":17,"skipped":158,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:26.118: INFO: Driver azure doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:33:26.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 20 lines ...
STEP: Creating a kubernetes client
Nov 22 03:33:26.123: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pod-disks
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  test/e2e/storage/pd.go:73
[It] should be able to delete a non-existent PD without error
  test/e2e/storage/pd.go:444
Nov 22 03:33:26.243: INFO: Only supported for providers [gce] (not skeleton)
[AfterEach] [sig-storage] Pod Disks
  test/e2e/framework/framework.go:150
Nov 22 03:33:26.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-4839" for this suite.


S [SKIPPING] [0.133 seconds]
[sig-storage] Pod Disks
test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [It]
  test/e2e/storage/pd.go:444

  Only supported for providers [gce] (not skeleton)

  test/e2e/storage/pd.go:445
------------------------------
... skipping 18 lines ...
      Driver local doesn't support ntfs -- skipping

      test/e2e/storage/testsuites/base.go:159
------------------------------
SSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":16,"skipped":131,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:32:51.731: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 36 lines ...
• [SLOW TEST:35.161 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  test/e2e/apimachinery/garbage_collector.go:437
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":-1,"completed":17,"skipped":131,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 12 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:33:28.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6419" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":10,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 92 lines ...
• [SLOW TEST:43.150 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  test/e2e/network/service.go:306
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":10,"skipped":75,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-scheduling] Multi-AZ Cluster Volumes [sig-storage]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 66 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:33:31.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4685" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":11,"skipped":94,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 58 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:203
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":21,"skipped":141,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 105 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:161
    should function for client IP based session affinity: http [LinuxOnly]
    test/e2e/network/networking.go:264
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]","total":-1,"completed":5,"skipped":33,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 25 lines ...
test/e2e/framework/framework.go:629
  When creating a pod with readOnlyRootFilesystem
  test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":167,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 17 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:33:35.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8286" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":23,"skipped":175,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 69 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:242
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:243
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":12,"skipped":87,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:37.504: INFO: Driver local doesn't support ext4 -- skipping
... skipping 15 lines ...
      Driver local doesn't support ext4 -- skipping

      test/e2e/storage/testsuites/base.go:159
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":11,"skipped":84,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:33:19.179: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 93 lines ...
  test/e2e/framework/framework.go:150
Nov 22 03:33:38.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4094" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":-1,"completed":24,"skipped":178,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":12,"skipped":84,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:38.117: INFO: Only supported for providers [gce gke] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:150
Nov 22 03:33:38.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 89 lines ...
• [SLOW TEST:136.169 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  test/e2e/apps/cronjob.go:171
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":10,"skipped":71,"failed":0}
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:33:39.192: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename zone-support
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 61 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      test/e2e/storage/testsuites/base.go:697
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":16,"skipped":67,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:33:30.173: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:33:39.300: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  test/e2e/apps/job.go:133
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:150
Nov 22 03:33:41.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3375" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":11,"skipped":80,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:101
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 53 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:100
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:437
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":20,"skipped":165,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 92 lines ...
STEP: Destroying namespace "services-4291" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:143

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":21,"skipped":181,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 79 lines ...
  test/e2e/storage/persistent_volumes-local.go:186
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:242
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:243
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":22,"skipped":193,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:43.567: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 48 lines ...
• [SLOW TEST:12.304 seconds]
[k8s.io] Docker Containers
test/e2e/framework/framework.go:629
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":100,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:44.217: INFO: Only supported for providers [azure] (not skeleton)
... skipping 198 lines ...
test/e2e/storage/utils/framework.go:23
  [k8s.io] GlusterDynamicProvisioner
  test/e2e/framework/framework.go:629
    should create and delete persistent volumes [fast]
    test/e2e/storage/volume_provisioning.go:749
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning [k8s.io] GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":-1,"completed":19,"skipped":150,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":12,"skipped":75,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
Nov 22 03:33:15.621: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 38 lines ...
• [SLOW TEST:31.850 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":13,"skipped":75,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:101
Nov 22 03:33:47.480: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 47 lines ...
STEP: Creating a kubernetes client
Nov 22 03:33:47.502: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  test/e2e/storage/volume_provisioning.go:136
[It] should report an error and create no PV
  test/e2e/storage/volume_provisioning.go:778
Nov 22 03:33:47.765: INFO: Only supported for providers [aws] (not skeleton)
[AfterEach] [sig-storage] Dynamic Provisioning
  test/e2e/framework/framework.go:150
Nov 22 03:33:47.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-6708" for this suite.


S [SKIPPING] [0.629 seconds]
[sig-storage] Dynamic Provisioning
test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  test/e2e/storage/volume_provisioning.go:777
    should report an error and create no PV [It]
    test/e2e/storage/volume_provisioning.go:778

    Only supported for providers [aws] (not skeleton)

    test/e2e/storage/volume_provisioning.go:779
------------------------------
... skipping 44 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl run rc
  test/e2e/kubectl/kubectl.go:1609
    should create an rc from an image  [Conformance]
    test/e2e/framework/framework.go:634
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":-1,"completed":12,"skipped":88,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:149
STEP: Creating a kubernetes client
... skipping 4 lines ...
  test/e2e/kubectl/kubectl.go:278
[It] should check if cluster-info dump succeeds
  test/e2e/kubectl/kubectl.go:1129
STEP: running cluster-info dump
Nov 22 03:33:48.603: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:44227 --kubeconfig=/root/.kube/kind-test-config cluster-info dump'
Nov 22 03:33:51.130: INFO: stderr: ""
Nov 22 03:33:51.131: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/nodes\",\n        \"resourceVersion\": \"25035\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane\",\n                \"selfLink\": \"/api/v1/nodes/kind-control-plane\",\n                \"uid\": \"e0ef7c67-b911-4212-b362-d0c7fd48544c\",\n                \"resourceVersion\": \"12564\",\n                \"creationTimestamp\": \"2019-11-22T03:23:53Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-control-plane\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"annotations\": {\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"/run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"10.244.0.0/24\",\n                \"podCIDRs\": [\n                    \"10.244.0.0/24\"\n                ],\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253696108Ki\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53588700Ki\",\n                    \"pods\": 
\"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253696108Ki\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53588700Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-11-22T03:29:56Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:23:51Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-11-22T03:29:56Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:23:51Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-11-22T03:29:56Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:23:51Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2019-11-22T03:29:56Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:24:56Z\",\n                        \"reason\": \"KubeletReady\",\n 
                       \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.17.0.4\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"kind-control-plane\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"a8a2ce0a225f40d584de3d30e4905dc8\",\n                    \"systemUUID\": \"ce38eddb-12ef-43ba-ab85-f38b37833c59\",\n                    \"bootID\": \"701660ff-2104-47a2-b0c0-023c3fc55e82\",\n                    \"kernelVersion\": \"4.14.138+\",\n                    \"osImage\": \"Ubuntu Eoan Ermine (development branch)\",\n                    \"containerRuntimeVersion\": \"containerd://1.3.0-27-g54658b88\",\n                    \"kubeletVersion\": \"v1.18.0-alpha.0.1116+94ec940998d730\",\n                    \"kubeProxyVersion\": \"v1.18.0-alpha.0.1116+94ec940998d730\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd:3.4.3-0\"\n                        ],\n                        \"sizeBytes\": 289997247\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.1116_94ec940998d730\"\n                        ],\n                        \"sizeBytes\": 196742299\n                    },\n                    {\n                        \"names\": [\n               
             \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.1116_94ec940998d730\"\n                        ],\n                        \"sizeBytes\": 181699748\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\"\n                        ],\n                        \"sizeBytes\": 123679226\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.1116_94ec940998d730\"\n                        ],\n                        \"sizeBytes\": 102544538\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns:1.6.5\"\n                        ],\n                        \"sizeBytes\": 41705951\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\n                        ],\n                        \"sizeBytes\": 32397572\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause:3.1\"\n                        ],\n                        \"sizeBytes\": 746479\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker\",\n                \"selfLink\": \"/api/v1/nodes/kind-worker\",\n                \"uid\": \"aaedd29b-6fbc-4ad5-93ad-ce22a0fb08d3\",\n                \"resourceVersion\": \"23835\",\n                \"creationTimestamp\": \"2019-11-22T03:24:30Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/arch\": 
\"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-worker\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"topology.hostpath.csi/node\": \"kind-worker\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-ephemeral-8744\\\":\\\"kind-worker\\\",\\\"csi-hostpath-ephemeral-9382\\\":\\\"kind-worker\\\",\\\"csi-mock-csi-mock-volumes-2960\\\":\\\"csi-mock-csi-mock-volumes-2960\\\",\\\"csi-mock-csi-mock-volumes-4256\\\":\\\"csi-mock-csi-mock-volumes-4256\\\",\\\"csi-mock-csi-mock-volumes-5111\\\":\\\"csi-mock-csi-mock-volumes-5111\\\",\\\"csi-mock-csi-mock-volumes-8072\\\":\\\"csi-mock-csi-mock-volumes-8072\\\",\\\"csi-mock-csi-mock-volumes-9179\\\":\\\"csi-mock-csi-mock-volumes-9179\\\",\\\"csi-mock-csi-mock-volumes-9408\\\":\\\"csi-mock-csi-mock-volumes-9408\\\"}\",\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"/run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"10.244.1.0/24\",\n                \"podCIDRs\": [\n                    \"10.244.1.0/24\"\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253696108Ki\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53588700Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253696108Ki\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53588700Ki\",\n                    \"pods\": \"110\"\n                },\n                
\"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-11-22T03:33:31Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:24:30Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-11-22T03:33:31Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:24:30Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-11-22T03:33:31Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:24:30Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2019-11-22T03:33:31Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:25:20Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.17.0.2\"\n                    },\n                    {\n          
              \"type\": \"Hostname\",\n                        \"address\": \"kind-worker\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"db4ab10e3ad4481d826da4975394df2f\",\n                    \"systemUUID\": \"ddf0aded-c173-43ad-a9ec-81b7c0c2d6ed\",\n                    \"bootID\": \"701660ff-2104-47a2-b0c0-023c3fc55e82\",\n                    \"kernelVersion\": \"4.14.138+\",\n                    \"osImage\": \"Ubuntu Eoan Ermine (development branch)\",\n                    \"containerRuntimeVersion\": \"containerd://1.3.0-27-g54658b88\",\n                    \"kubeletVersion\": \"v1.18.0-alpha.0.1116+94ec940998d730\",\n                    \"kubeProxyVersion\": \"v1.18.0-alpha.0.1116+94ec940998d730\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd:3.4.3-0\"\n                        ],\n                        \"sizeBytes\": 289997247\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.1116_94ec940998d730\"\n                        ],\n                        \"sizeBytes\": 196742299\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.1116_94ec940998d730\"\n                        ],\n                        \"sizeBytes\": 181699748\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\"\n                        ],\n                        \"sizeBytes\": 123679226\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.1116_94ec940998d730\"\n                        ],\n                        \"sizeBytes\": 102544538\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/fedora@sha256:d4f7df6b691d61af6cee7328f82f1d8afdef63bc38f58516858ae3045083924a\",\n                            \"docker.io/library/fedora:latest\"\n                        ],\n                        \"sizeBytes\": 66777964\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a\",\n                            \"docker.io/library/httpd:2.4.39-alpine\"\n                        ],\n                        \"sizeBytes\": 41901429\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns:1.6.5\"\n                        ],\n                        \"sizeBytes\": 41705951\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                            \"docker.io/library/httpd:2.4.38-alpine\"\n                        ],\n                        \"sizeBytes\": 40765017\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\n                        ],\n                        \"sizeBytes\": 32397572\n   
                 },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-provisioner:v1.4.0-rc1\"\n                        ],\n                        \"sizeBytes\": 20059276\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-provisioner:v1.5.0-rc1\"\n                        ],\n                        \"sizeBytes\": 19160190\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2\"\n                        ],\n                        \"sizeBytes\": 18500284\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\",\n                            \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\n                        ],\n                        \"sizeBytes\": 17444032\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-attacher:v2.0.0\"\n                        ],\n                        \"sizeBytes\": 17305293\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-resizer:v0.3.0\"\n                        ],\n                        \"sizeBytes\": 17263344\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-attacher:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 15527500\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-resizer:v0.1.0\"\n                        ],\n                        
\"sizeBytes\": 15471809\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\n                        ],\n                        \"sizeBytes\": 13344760\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19\",\n                            \"gcr.io/kubernetes-e2e-test-images/echoserver:2.2\"\n                        ],\n                        \"sizeBytes\": 10198788\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\n                        ],\n                        \"sizeBytes\": 7676183\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/mock-driver:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 7377931\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                            \"docker.io/library/nginx:1.14-alpine\"\n                        ],\n                        \"sizeBytes\": 6978806\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-node-driver-registrar:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 6939423\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/livenessprobe:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 6690548\n                    },\n                  
  {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd\",\n                            \"gcr.io/kubernetes-e2e-test-images/dnsutils:1.1\"\n                        ],\n                        \"sizeBytes\": 4331310\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411\",\n                            \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\"\n                        ],\n                        \"sizeBytes\": 3054649\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc\",\n                            \"gcr.io/kubernetes-e2e-test-images/nautilus:1.0\"\n                        ],\n                        \"sizeBytes\": 1804628\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6\",\n                            \"gcr.io/kubernetes-e2e-test-images/kitten:1.0\"\n                        ],\n                        \"sizeBytes\": 1799936\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause:3.1\"\n                        ],\n                        \"sizeBytes\": 746479\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796\",\n                            
\"docker.io/library/busybox:1.29\"\n                        ],\n                        \"sizeBytes\": 732685\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2\",\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\"\n                        ],\n                        \"sizeBytes\": 599341\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d\",\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0\"\n                        ],\n                        \"sizeBytes\": 539309\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8072^4\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8072^4\",\n                        \"devicePath\": \"\"\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker2\",\n                \"selfLink\": \"/api/v1/nodes/kind-worker2\",\n                \"uid\": \"f89b8c33-a85d-4795-ba9a-17fd31fec7e3\",\n                \"resourceVersion\": \"21209\",\n                \"creationTimestamp\": \"2019-11-22T03:24:31Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-worker2\",\n                    \"kubernetes.io/os\": 
\"linux\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-mock-csi-mock-volumes-2478\\\":\\\"csi-mock-csi-mock-volumes-2478\\\",\\\"csi-mock-csi-mock-volumes-7722\\\":\\\"csi-mock-csi-mock-volumes-7722\\\"}\",\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"/run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"10.244.2.0/24\",\n                \"podCIDRs\": [\n                    \"10.244.2.0/24\"\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253696108Ki\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53588700Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253696108Ki\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53588700Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-11-22T03:31:52Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:24:31Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        
\"lastHeartbeatTime\": \"2019-11-22T03:31:52Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:24:31Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2019-11-22T03:31:52Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:24:31Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2019-11-22T03:31:52Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:25:21Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.17.0.3\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"kind-worker2\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"fe3d2c32193b49c6acbdaaf0765a6e04\",\n                    \"systemUUID\": \"6a11fe99-5534-4f53-a8ad-d8e01aed0c25\",\n                    \"bootID\": \"701660ff-2104-47a2-b0c0-023c3fc55e82\",\n                    \"kernelVersion\": 
\"4.14.138+\",\n                    \"osImage\": \"Ubuntu Eoan Ermine (development branch)\",\n                    \"containerRuntimeVersion\": \"containerd://1.3.0-27-g54658b88\",\n                    \"kubeletVersion\": \"v1.18.0-alpha.0.1116+94ec940998d730\",\n                    \"kubeProxyVersion\": \"v1.18.0-alpha.0.1116+94ec940998d730\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd:3.4.3-0\"\n                        ],\n                        \"sizeBytes\": 289997247\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.1116_94ec940998d730\"\n                        ],\n                        \"sizeBytes\": 196742299\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.1116_94ec940998d730\"\n                        ],\n                        \"sizeBytes\": 181699748\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\"\n                        ],\n                        \"sizeBytes\": 123679226\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.1116_94ec940998d730\"\n                        ],\n                        \"sizeBytes\": 102544538\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb\",\n                         
   \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0\"\n                        ],\n                        \"sizeBytes\": 85425365\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/fedora@sha256:d4f7df6b691d61af6cee7328f82f1d8afdef63bc38f58516858ae3045083924a\",\n                            \"docker.io/library/fedora:latest\"\n                        ],\n                        \"sizeBytes\": 66777964\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a\",\n                            \"docker.io/library/httpd:2.4.39-alpine\"\n                        ],\n                        \"sizeBytes\": 41901429\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns:1.6.5\"\n                        ],\n                        \"sizeBytes\": 41705951\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                            \"docker.io/library/httpd:2.4.38-alpine\"\n                        ],\n                        \"sizeBytes\": 40765017\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\"\n                        ],\n                        \"sizeBytes\": 32397572\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-provisioner:v1.4.0-rc1\"\n                        ],\n                        \"sizeBytes\": 20059276\n                    },\n  
                  {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-provisioner:v1.5.0-rc1\"\n                        ],\n                        \"sizeBytes\": 19160190\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-snapshotter:v2.0.0-rc2\"\n                        ],\n                        \"sizeBytes\": 18500284\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b\",\n                            \"gcr.io/kubernetes-e2e-test-images/nonroot:1.0\"\n                        ],\n                        \"sizeBytes\": 17747507\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\",\n                            \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\n                        ],\n                        \"sizeBytes\": 17444032\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-attacher:v2.0.0\"\n                        ],\n                        \"sizeBytes\": 17305293\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-resizer:v0.3.0\"\n                        ],\n                        \"sizeBytes\": 17263344\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-attacher:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 15527500\n                    },\n                    {\n                        \"names\": 
[\n                            \"quay.io/k8scsi/csi-resizer:v0.1.0\"\n                        ],\n                        \"sizeBytes\": 15471809\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\n                        ],\n                        \"sizeBytes\": 13344760\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19\",\n                            \"gcr.io/kubernetes-e2e-test-images/echoserver:2.2\"\n                        ],\n                        \"sizeBytes\": 10198788\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\n                        ],\n                        \"sizeBytes\": 7676183\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/mock-driver:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 7377931\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                            \"docker.io/library/nginx:1.14-alpine\"\n                        ],\n                        \"sizeBytes\": 6978806\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-node-driver-registrar:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 6939423\n                    },\n                    {\n                        \"names\": [\n                            
\"quay.io/k8scsi/livenessprobe:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 6690548\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd\",\n                            \"gcr.io/kubernetes-e2e-test-images/dnsutils:1.1\"\n                        ],\n                        \"sizeBytes\": 4331310\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411\",\n                            \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\"\n                        ],\n                        \"sizeBytes\": 3054649\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc\",\n                            \"gcr.io/kubernetes-e2e-test-images/nautilus:1.0\"\n                        ],\n                        \"sizeBytes\": 1804628\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6\",\n                            \"gcr.io/kubernetes-e2e-test-images/kitten:1.0\"\n                        ],\n                        \"sizeBytes\": 1799936\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e\",\n                            
\"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\n                        ],\n                        \"sizeBytes\": 1791163\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/busybox@sha256:1303dbf110c57f3edf68d9f5a16c082ec06c4cf7604831669faf2c712260b5a0\",\n                            \"docker.io/library/busybox:latest\"\n                        ],\n                        \"sizeBytes\": 764944\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause:3.1\"\n                        ],\n                        \"sizeBytes\": 746479\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796\",\n                            \"docker.io/library/busybox:1.29\"\n                        ],\n                        \"sizeBytes\": 732685\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2\",\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\"\n                        ],\n                        \"sizeBytes\": 599341\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d\",\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0\"\n                        ],\n                        \"sizeBytes\": 539309\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    
\"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/kube-system/events\",\n        \"resourceVersion\": \"25035\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-mxkvk.15d95e23c1f5c0b6\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-mxkvk.15d95e23c1f5c0b6\",\n                \"uid\": \"a2c38ef5-56e1-4a5e-b07c-1dcfd5729244\",\n                \"resourceVersion\": \"365\",\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-mxkvk\",\n                \"uid\": \"7549aba8-0f69-4752-889b-e285e588758b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"340\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"count\": 2,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-mxkvk.15d95e27febe6222\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-mxkvk.15d95e27febe6222\",\n                \"uid\": \"9d331385-ea77-4a2f-aa2a-90d04d9fbe3b\",\n                \"resourceVersion\": \"452\",\n                \"creationTimestamp\": \"2019-11-22T03:24:30Z\"\n        
    },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-mxkvk\",\n                \"uid\": \"7549aba8-0f69-4752-889b-e285e588758b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"353\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:30Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:30Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-mxkvk.15d95e285824bf66\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-mxkvk.15d95e285824bf66\",\n                \"uid\": \"586f3702-774c-4805-b14a-d107ba1f9995\",\n                \"resourceVersion\": \"588\",\n                \"creationTimestamp\": \"2019-11-22T03:24:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-mxkvk\",\n                \"uid\": \"7549aba8-0f69-4752-889b-e285e588758b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"453\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n         
   \"firstTimestamp\": \"2019-11-22T03:24:32Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:56Z\",\n            \"count\": 4,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-mxkvk.15d95e2f1861abf2\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-mxkvk.15d95e2f1861abf2\",\n                \"uid\": \"94ff2fd3-2311-4d49-b7bc-06879f728414\",\n                \"resourceVersion\": \"601\",\n                \"creationTimestamp\": \"2019-11-22T03:25:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-mxkvk\",\n                \"uid\": \"7549aba8-0f69-4752-889b-e285e588758b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"493\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-6955765f44-mxkvk to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:25:01Z\",\n            \"lastTimestamp\": \"2019-11-22T03:25:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-mxkvk.15d95e309b09113a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-mxkvk.15d95e309b09113a\",\n                \"uid\": 
\"2dfe0b11-5a9d-49b6-b606-fe636d15ce69\",\n                \"resourceVersion\": \"611\",\n                \"creationTimestamp\": \"2019-11-22T03:25:07Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-mxkvk\",\n                \"uid\": \"7549aba8-0f69-4752-889b-e285e588758b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"600\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/coredns:1.6.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:25:07Z\",\n            \"lastTimestamp\": \"2019-11-22T03:25:07Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-mxkvk.15d95e30d2baf943\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-mxkvk.15d95e30d2baf943\",\n                \"uid\": \"f733f28c-687b-4147-a269-7e97e4ab487b\",\n                \"resourceVersion\": \"620\",\n                \"creationTimestamp\": \"2019-11-22T03:25:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-mxkvk\",\n                \"uid\": \"7549aba8-0f69-4752-889b-e285e588758b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"600\",\n                
\"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:25:08Z\",\n            \"lastTimestamp\": \"2019-11-22T03:25:08Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-mxkvk.15d95e30dbe95ae0\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-mxkvk.15d95e30dbe95ae0\",\n                \"uid\": \"4a1de847-852a-4375-9f7f-2783cad9ac1d\",\n                \"resourceVersion\": \"622\",\n                \"creationTimestamp\": \"2019-11-22T03:25:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-mxkvk\",\n                \"uid\": \"7549aba8-0f69-4752-889b-e285e588758b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"600\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:25:08Z\",\n            \"lastTimestamp\": \"2019-11-22T03:25:08Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            
\"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-v49tc.15d95e23c09d5144\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-v49tc.15d95e23c09d5144\",\n                \"uid\": \"9fbe4fff-cb00-4c8c-9c50-174bbe6d73be\",\n                \"resourceVersion\": \"341\",\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-v49tc\",\n                \"uid\": \"bb0fc871-0d36-43b6-b69c-cd85dff6027f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"338\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-v49tc.15d95e27fda49590\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-v49tc.15d95e27fda49590\",\n                \"uid\": \"3f4aa901-0326-42bc-808b-b785ee28d5e1\",\n                \"resourceVersion\": \"446\",\n                \"creationTimestamp\": \"2019-11-22T03:24:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                
\"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-v49tc\",\n                \"uid\": \"bb0fc871-0d36-43b6-b69c-cd85dff6027f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"348\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:30Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:30Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-v49tc.15d95e28570c77a3\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-v49tc.15d95e28570c77a3\",\n                \"uid\": \"35cd203f-1f96-4b00-848e-7c1e6bca8bb1\",\n                \"resourceVersion\": \"586\",\n                \"creationTimestamp\": \"2019-11-22T03:24:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-v49tc\",\n                \"uid\": \"bb0fc871-0d36-43b6-b69c-cd85dff6027f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"447\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:32Z\",\n            \"lastTimestamp\": 
\"2019-11-22T03:24:56Z\",\n            \"count\": 4,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-v49tc.15d95e2f18dcd1fc\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-v49tc.15d95e2f18dcd1fc\",\n                \"uid\": \"b0b1d68a-97f1-4074-bde6-26fc27eac9f4\",\n                \"resourceVersion\": \"602\",\n                \"creationTimestamp\": \"2019-11-22T03:25:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-v49tc\",\n                \"uid\": \"bb0fc871-0d36-43b6-b69c-cd85dff6027f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"491\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-6955765f44-v49tc to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:25:01Z\",\n            \"lastTimestamp\": \"2019-11-22T03:25:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-v49tc.15d95e30abd3748b\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-v49tc.15d95e30abd3748b\",\n                \"uid\": \"b0ad39af-1861-4956-8e38-b0cc0cb2852e\",\n                \"resourceVersion\": \"617\",\n     
           \"creationTimestamp\": \"2019-11-22T03:25:07Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-v49tc\",\n                \"uid\": \"bb0fc871-0d36-43b6-b69c-cd85dff6027f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"599\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/coredns:1.6.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:25:07Z\",\n            \"lastTimestamp\": \"2019-11-22T03:25:07Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-v49tc.15d95e30d24e499f\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-v49tc.15d95e30d24e499f\",\n                \"uid\": \"f6b3aa81-7d2b-4dfb-97db-3b34e05bcf8b\",\n                \"resourceVersion\": \"619\",\n                \"creationTimestamp\": \"2019-11-22T03:25:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-v49tc\",\n                \"uid\": \"bb0fc871-0d36-43b6-b69c-cd85dff6027f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"599\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": 
\"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:25:08Z\",\n            \"lastTimestamp\": \"2019-11-22T03:25:08Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-v49tc.15d95e30dbe09ffb\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-v49tc.15d95e30dbe09ffb\",\n                \"uid\": \"c0f2f21f-9e28-4346-8d10-4fb6b1ef95fe\",\n                \"resourceVersion\": \"621\",\n                \"creationTimestamp\": \"2019-11-22T03:25:08Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-v49tc\",\n                \"uid\": \"bb0fc871-0d36-43b6-b69c-cd85dff6027f\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"599\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:25:08Z\",\n            \"lastTimestamp\": \"2019-11-22T03:25:08Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n               
 \"name\": \"coredns-6955765f44.15d95e23c0880dd3\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44.15d95e23c0880dd3\",\n                \"uid\": \"98fdc73b-ecbd-4874-b692-38353d6fdf50\",\n                \"resourceVersion\": \"342\",\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44\",\n                \"uid\": \"2bd55972-0d13-43fa-ac4c-5dd3d07a8b9d\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"333\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-6955765f44-v49tc\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44.15d95e23c0e8f3a2\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44.15d95e23c0e8f3a2\",\n                \"uid\": \"791382c7-e76f-43ef-86be-521fe83beb73\",\n                \"resourceVersion\": \"349\",\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44\",\n                \"uid\": \"2bd55972-0d13-43fa-ac4c-5dd3d07a8b9d\",\n   
             \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"333\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-6955765f44-mxkvk\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.15d95e23c01414f5\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns.15d95e23c01414f5\",\n                \"uid\": \"f4e0e9ae-b4a3-4aa5-9c6a-0858d23640e9\",\n                \"resourceVersion\": \"336\",\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"d1a142b4-beb2-484e-bff6-b330c66cd98c\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"175\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-6955765f44 to 2\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n          
      \"name\": \"kindnet-krxhw.15d95e27ffcc32c1\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-krxhw.15d95e27ffcc32c1\",\n                \"uid\": \"19f5bbe4-f218-4d6d-bf7a-3da30c1aa9e5\",\n                \"resourceVersion\": \"461\",\n                \"creationTimestamp\": \"2019-11-22T03:24:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-krxhw\",\n                \"uid\": \"214377ff-8ee6-49a4-b676-1b9e5584a1d3\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"450\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kindnet-krxhw to kind-worker\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:30Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-krxhw.15d95e28277b6ecc\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-krxhw.15d95e28277b6ecc\",\n                \"uid\": \"595ed965-5cc3-4ce1-844f-06c84b69e228\",\n                \"resourceVersion\": \"469\",\n                \"creationTimestamp\": \"2019-11-22T03:24:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-krxhw\",\n                \"uid\": \"214377ff-8ee6-49a4-b676-1b9e5584a1d3\",\n                \"apiVersion\": 
\"v1\",\n                \"resourceVersion\": \"458\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:31Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-krxhw.15d95e2932751e83\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-krxhw.15d95e2932751e83\",\n                \"uid\": \"8de4464c-c4b2-402f-8c65-91588778d87d\",\n                \"resourceVersion\": \"515\",\n                \"creationTimestamp\": \"2019-11-22T03:24:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-krxhw\",\n                \"uid\": \"214377ff-8ee6-49a4-b676-1b9e5584a1d3\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"458\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": 
\"2019-11-22T03:24:35Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-krxhw.15d95e2957b349dc\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-krxhw.15d95e2957b349dc\",\n                \"uid\": \"9cc3d1e7-25f3-4882-b2eb-976fb21cea62\",\n                \"resourceVersion\": \"517\",\n                \"creationTimestamp\": \"2019-11-22T03:24:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-krxhw\",\n                \"uid\": \"214377ff-8ee6-49a4-b676-1b9e5584a1d3\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"458\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:36Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-krxhw.15d95e296c3bf694\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-krxhw.15d95e296c3bf694\",\n                \"uid\": \"63420623-f9a8-48fa-bed7-a650c2adef81\",\n   
             \"resourceVersion\": \"520\",\n                \"creationTimestamp\": \"2019-11-22T03:24:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-krxhw\",\n                \"uid\": \"214377ff-8ee6-49a4-b676-1b9e5584a1d3\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"458\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:36Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-lnv5z.15d95e23d51896f6\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-lnv5z.15d95e23d51896f6\",\n                \"uid\": \"93664938-fc0b-4907-b406-6631212cc3fe\",\n                \"resourceVersion\": \"379\",\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-lnv5z\",\n                \"uid\": \"816bf19e-4fa9-46ef-946c-ce7648120dec\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"371\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kindnet-lnv5z to kind-control-plane\",\n  
          \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-lnv5z.15d95e24006ebff6\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-lnv5z.15d95e24006ebff6\",\n                \"uid\": \"a1fba062-08bc-43f6-825f-07c076d7efcf\",\n                \"resourceVersion\": \"390\",\n                \"creationTimestamp\": \"2019-11-22T03:24:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-lnv5z\",\n                \"uid\": \"816bf19e-4fa9-46ef-946c-ce7648120dec\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"377\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:13Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:13Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-lnv5z.15d95e2491335782\",\n    
            \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-lnv5z.15d95e2491335782\",\n                \"uid\": \"800b9a10-fcf4-4da4-9cf2-f52f1a864980\",\n                \"resourceVersion\": \"403\",\n                \"creationTimestamp\": \"2019-11-22T03:24:15Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-lnv5z\",\n                \"uid\": \"816bf19e-4fa9-46ef-946c-ce7648120dec\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"377\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image \\\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:15Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:15Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-lnv5z.15d95e249ab73b8d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-lnv5z.15d95e249ab73b8d\",\n                \"uid\": \"9aca1c62-cced-4b28-95b2-104eba08efef\",\n                \"resourceVersion\": \"404\",\n                \"creationTimestamp\": \"2019-11-22T03:24:16Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"kindnet-lnv5z\",\n                \"uid\": \"816bf19e-4fa9-46ef-946c-ce7648120dec\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"377\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:16Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-lnv5z.15d95e24b07179c4\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-lnv5z.15d95e24b07179c4\",\n                \"uid\": \"f34a1cbb-c189-4807-aaa8-8055d5804c52\",\n                \"resourceVersion\": \"405\",\n                \"creationTimestamp\": \"2019-11-22T03:24:16Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-lnv5z\",\n                \"uid\": \"816bf19e-4fa9-46ef-946c-ce7648120dec\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"377\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:16Z\",\n            \"lastTimestamp\": 
\"2019-11-22T03:24:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-rmvhf.15d95e283a56aeea\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-rmvhf.15d95e283a56aeea\",\n                \"uid\": \"46117cdd-d8c3-4711-97ed-6b76c286e10c\",\n                \"resourceVersion\": \"486\",\n                \"creationTimestamp\": \"2019-11-22T03:24:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-rmvhf\",\n                \"uid\": \"17382d95-4a62-4fd4-bfaa-f6305d842575\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"476\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kindnet-rmvhf to kind-worker2\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:31Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-rmvhf.15d95e2862fbe96b\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-rmvhf.15d95e2862fbe96b\",\n                \"uid\": \"c985f430-1917-49f0-ad02-2595435e3ee9\",\n                \"resourceVersion\": \"495\",\n                \"creationTimestamp\": \"2019-11-22T03:24:32Z\"\n            
},\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-rmvhf\",\n                \"uid\": \"17382d95-4a62-4fd4-bfaa-f6305d842575\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"481\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulling\",\n            \"message\": \"Pulling image \\\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:32Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-rmvhf.15d95e29566c4a37\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-rmvhf.15d95e29566c4a37\",\n                \"uid\": \"fc345b26-fe36-4976-b46b-457bb0da44ba\",\n                \"resourceVersion\": \"516\",\n                \"creationTimestamp\": \"2019-11-22T03:24:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-rmvhf\",\n                \"uid\": \"17382d95-4a62-4fd4-bfaa-f6305d842575\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"481\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Successfully pulled image 
\\\"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\\\"\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:36Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-rmvhf.15d95e295bd72d15\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-rmvhf.15d95e295bd72d15\",\n                \"uid\": \"06af71b9-8c66-4d95-981d-9d536e794dff\",\n                \"resourceVersion\": \"519\",\n                \"creationTimestamp\": \"2019-11-22T03:24:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-rmvhf\",\n                \"uid\": \"17382d95-4a62-4fd4-bfaa-f6305d842575\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"481\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:36Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                
\"name\": \"kindnet-rmvhf.15d95e29716ff9a9\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-rmvhf.15d95e29716ff9a9\",\n                \"uid\": \"b0b777e5-2dbd-4943-a1f6-93353760eacc\",\n                \"resourceVersion\": \"521\",\n                \"creationTimestamp\": \"2019-11-22T03:24:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-rmvhf\",\n                \"uid\": \"17382d95-4a62-4fd4-bfaa-f6305d842575\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"481\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:36Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet.15d95e23d3fc14a2\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet.15d95e23d3fc14a2\",\n                \"uid\": \"ed4f049d-6921-4509-8925-845f304a47e3\",\n                \"resourceVersion\": \"372\",\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet\",\n                \"uid\": 
\"4d8485ee-8e7b-4f06-932a-ef1d60e6b1de\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"239\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kindnet-lnv5z\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet.15d95e27fee09dbc\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet.15d95e27fee09dbc\",\n                \"uid\": \"904e1237-d5ec-4782-9a1c-de2cb8cf306f\",\n                \"resourceVersion\": \"455\",\n                \"creationTimestamp\": \"2019-11-22T03:24:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet\",\n                \"uid\": \"4d8485ee-8e7b-4f06-932a-ef1d60e6b1de\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"408\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kindnet-krxhw\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:30Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": 
{\n                \"name\": \"kindnet.15d95e2839bb36dd\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet.15d95e2839bb36dd\",\n                \"uid\": \"b2df1dd3-f69f-4847-ad6e-08bd56abb6f7\",\n                \"resourceVersion\": \"479\",\n                \"creationTimestamp\": \"2019-11-22T03:24:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet\",\n                \"uid\": \"4d8485ee-8e7b-4f06-932a-ef1d60e6b1de\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"454\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kindnet-rmvhf\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:31Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.15d95e20285499d8\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-controller-manager.15d95e20285499d8\",\n                \"uid\": \"f00f5ff8-a359-495e-a5bd-f0dd70bd5026\",\n                \"resourceVersion\": \"194\",\n                \"creationTimestamp\": \"2019-11-22T03:23:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Endpoints\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"23b226bd-d769-43a4-b14a-483c8296911b\",\n                
\"apiVersion\": \"v1\",\n                \"resourceVersion\": \"192\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"kind-control-plane_cb68fe76-a826-4e72-81b7-ec45e90de04d became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:23:56Z\",\n            \"lastTimestamp\": \"2019-11-22T03:23:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.15d95e202854c860\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-controller-manager.15d95e202854c860\",\n                \"uid\": \"4e185720-3312-4d53-bb0b-72c3ddadf4ee\",\n                \"resourceVersion\": \"195\",\n                \"creationTimestamp\": \"2019-11-22T03:23:56Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"6ac04527-9044-475f-819b-24d122f538c3\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"193\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"kind-control-plane_cb68fe76-a826-4e72-81b7-ec45e90de04d became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:23:56Z\",\n            \"lastTimestamp\": \"2019-11-22T03:23:56Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n        
    \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-m22kv.15d95e28003b1ccc\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-m22kv.15d95e28003b1ccc\",\n                \"uid\": \"1eb74ec9-3f7f-4dd4-8107-e42e260dc012\",\n                \"resourceVersion\": \"462\",\n                \"creationTimestamp\": \"2019-11-22T03:24:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-m22kv\",\n                \"uid\": \"35611aac-8044-4d6b-babe-443712ca7b89\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"451\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-m22kv to kind-worker\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:30Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-m22kv.15d95e281d3e2172\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-m22kv.15d95e281d3e2172\",\n                \"uid\": \"0d53e3d4-62f7-41ef-8239-3d507121fd63\",\n                \"resourceVersion\": \"466\",\n                \"creationTimestamp\": \"2019-11-22T03:24:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": 
\"kube-proxy-m22kv\",\n                \"uid\": \"35611aac-8044-4d6b-babe-443712ca7b89\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"459\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:31Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-m22kv.15d95e288eeb0df3\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-m22kv.15d95e288eeb0df3\",\n                \"uid\": \"84a6890f-30b0-4917-a9ae-8a826f5ad4ed\",\n                \"resourceVersion\": \"498\",\n                \"creationTimestamp\": \"2019-11-22T03:24:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-m22kv\",\n                \"uid\": \"35611aac-8044-4d6b-babe-443712ca7b89\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"459\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            
\"firstTimestamp\": \"2019-11-22T03:24:33Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-m22kv.15d95e2897a3128a\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-m22kv.15d95e2897a3128a\",\n                \"uid\": \"ed563c13-1c14-4923-acfe-84ca887daf64\",\n                \"resourceVersion\": \"499\",\n                \"creationTimestamp\": \"2019-11-22T03:24:33Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-m22kv\",\n                \"uid\": \"35611aac-8044-4d6b-babe-443712ca7b89\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"459\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:33Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:33Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-v8fsf.15d95e283a2d8eb6\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-v8fsf.15d95e283a2d8eb6\",\n                \"uid\": 
\"cd6808b6-8678-461b-80f1-2e66fea0f779\",\n                \"resourceVersion\": \"483\",\n                \"creationTimestamp\": \"2019-11-22T03:24:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-v8fsf\",\n                \"uid\": \"b5799c19-4387-4a22-a896-02ad2670430d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"474\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-v8fsf to kind-worker2\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:31Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-v8fsf.15d95e2858dad3ec\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-v8fsf.15d95e2858dad3ec\",\n                \"uid\": \"a4c94fe0-c917-4cb2-834e-8013e814558d\",\n                \"resourceVersion\": \"494\",\n                \"creationTimestamp\": \"2019-11-22T03:24:32Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-v8fsf\",\n                \"uid\": \"b5799c19-4387-4a22-a896-02ad2670430d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"480\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image 
\\\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:32Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:32Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-v8fsf.15d95e28d08ce50b\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-v8fsf.15d95e28d08ce50b\",\n                \"uid\": \"00410520-0e6d-447b-b03a-9a955c24a027\",\n                \"resourceVersion\": \"506\",\n                \"creationTimestamp\": \"2019-11-22T03:24:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-v8fsf\",\n                \"uid\": \"b5799c19-4387-4a22-a896-02ad2670430d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"480\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:34Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                
\"name\": \"kube-proxy-v8fsf.15d95e28d95ffbd4\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-v8fsf.15d95e28d95ffbd4\",\n                \"uid\": \"705f9616-7c00-4009-a0ba-d6289a0f7f04\",\n                \"resourceVersion\": \"507\",\n                \"creationTimestamp\": \"2019-11-22T03:24:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-v8fsf\",\n                \"uid\": \"b5799c19-4387-4a22-a896-02ad2670430d\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"480\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:34Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-vjhtv.15d95e23d604c806\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-vjhtv.15d95e23d604c806\",\n                \"uid\": \"430afb78-ea5e-4465-8c31-7c03a90e9bc6\",\n                \"resourceVersion\": \"380\",\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-vjhtv\",\n                \"uid\": 
\"89d64f12-eb32-4ac8-b9d6-7c369ca54e81\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"373\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-vjhtv to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-vjhtv.15d95e23f2da9b34\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-vjhtv.15d95e23f2da9b34\",\n                \"uid\": \"9ccf55eb-90a4-4bec-986c-d698653b7690\",\n                \"resourceVersion\": \"387\",\n                \"creationTimestamp\": \"2019-11-22T03:24:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-vjhtv\",\n                \"uid\": \"89d64f12-eb32-4ac8-b9d6-7c369ca54e81\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"378\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:13Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:13Z\",\n        
    \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-vjhtv.15d95e2424f2af0f\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-vjhtv.15d95e2424f2af0f\",\n                \"uid\": \"6d8dc393-95b2-4005-af82-fad35a04088c\",\n                \"resourceVersion\": \"391\",\n                \"creationTimestamp\": \"2019-11-22T03:24:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-vjhtv\",\n                \"uid\": \"89d64f12-eb32-4ac8-b9d6-7c369ca54e81\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"378\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:14Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-vjhtv.15d95e242b6d893d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-vjhtv.15d95e242b6d893d\",\n                \"uid\": \"2a3ef865-2610-4c7e-a973-ec1a6ccae66b\",\n                \"resourceVersion\": \"392\",\n                
\"creationTimestamp\": \"2019-11-22T03:24:14Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-vjhtv\",\n                \"uid\": \"89d64f12-eb32-4ac8-b9d6-7c369ca54e81\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"378\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:14Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:14Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.15d95e23d48b7f28\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy.15d95e23d48b7f28\",\n                \"uid\": \"52c477e1-712a-4d6e-b9dc-1507ebcb7c37\",\n                \"resourceVersion\": \"375\",\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"cb9dfaaa-9bec-4422-ad67-55d6f56a5fb3\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"183\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-vjhtv\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n 
           },\n            \"firstTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.15d95e27ff1a78f8\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy.15d95e27ff1a78f8\",\n                \"uid\": \"c6fd3d50-e66e-4b36-b057-8ea685aa88e7\",\n                \"resourceVersion\": \"457\",\n                \"creationTimestamp\": \"2019-11-22T03:24:30Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"cb9dfaaa-9bec-4422-ad67-55d6f56a5fb3\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"394\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-m22kv\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:30Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:30Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.15d95e2839bc9d2d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy.15d95e2839bc9d2d\",\n                \"uid\": \"15ca022e-e4be-4925-b996-c6575f98fc9b\",\n                \"resourceVersion\": \"484\",\n 
               \"creationTimestamp\": \"2019-11-22T03:24:31Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"cb9dfaaa-9bec-4422-ad67-55d6f56a5fb3\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"460\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-v8fsf\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:24:31Z\",\n            \"lastTimestamp\": \"2019-11-22T03:24:31Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.15d95e1fbfd509a5\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-scheduler.15d95e1fbfd509a5\",\n                \"uid\": \"7e6e30b3-f370-4a8a-a375-87886c5629fc\",\n                \"resourceVersion\": \"160\",\n                \"creationTimestamp\": \"2019-11-22T03:23:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Endpoints\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"76add927-c304-42c4-9e0c-a56e43e22497\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"157\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"kind-control-plane_124e1804-3c2f-4bce-88f9-cec3a614addb became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            
\"firstTimestamp\": \"2019-11-22T03:23:55Z\",\n            \"lastTimestamp\": \"2019-11-22T03:23:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.15d95e1fbfd53ed4\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-scheduler.15d95e1fbfd53ed4\",\n                \"uid\": \"4ec6a4be-32bd-4ff4-8418-b24afca2a592\",\n                \"resourceVersion\": \"159\",\n                \"creationTimestamp\": \"2019-11-22T03:23:55Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"9049c62f-4460-43df-bd50-e500e8f470e2\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"158\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"kind-control-plane_124e1804-3c2f-4bce-88f9-cec3a614addb became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2019-11-22T03:23:55Z\",\n            \"lastTimestamp\": \"2019-11-22T03:23:55Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/kube-system/replicationcontrollers\",\n        \"resourceVersion\": \"25039\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    
\"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/kube-system/services\",\n        \"resourceVersion\": \"25044\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/services/kube-dns\",\n                \"uid\": \"c289a028-8716-46a9-9acb-6d3bc19c0eee\",\n                \"resourceVersion\": \"177\",\n                \"creationTimestamp\": \"2019-11-22T03:23:56Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"KubeDNS\"\n                },\n                \"annotations\": {\n                    \"prometheus.io/port\": \"9153\",\n                    \"prometheus.io/scrape\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"ports\": [\n                    {\n                        \"name\": \"dns\",\n                        \"protocol\": \"UDP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"dns-tcp\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"metrics\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 9153,\n                        \"targetPort\": 9153\n                    }\n                ],\n                \"selector\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"clusterIP\": \"10.96.0.10\",\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\"\n            },\n            \"status\": {\n                
\"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets\",\n        \"resourceVersion\": \"25045\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kindnet\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet\",\n                \"uid\": \"4d8485ee-8e7b-4f06-932a-ef1d60e6b1de\",\n                \"resourceVersion\": \"525\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2019-11-22T03:23:58Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"k8s-app\": \"kindnet\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"app\": \"kindnet\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"app\": \"kindnet\",\n                            \"k8s-app\": \"kindnet\",\n                            \"tier\": \"node\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/cni/net.d\",\n                                    \"type\": \"\"\n                                }\n                           
 },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"hostPath\": {\n                                    \"path\": \"/run/xtables.lock\",\n                                    \"type\": \"FileOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"hostPath\": {\n                                    \"path\": \"/lib/modules\",\n                                    \"type\": \"\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kindnet-cni\",\n                                \"image\": \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n                                \"env\": [\n                                    {\n                                        \"name\": \"HOST_IP\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"status.hostIP\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"POD_IP\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"status.podIP\"\n                                            }\n                                        }\n        
                            },\n                                    {\n                                        \"name\": \"POD_SUBNET\",\n                                        \"value\": \"10.244.0.0/16\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"50Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"cni-cfg\",\n                                        \"mountPath\": \"/etc/cni/net.d\"\n                                    },\n                                    {\n                                        \"name\": \"xtables-lock\",\n                                        \"mountPath\": \"/run/xtables.lock\"\n                                    },\n                                    {\n                                        \"name\": \"lib-modules\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/lib/modules\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        
\"add\": [\n                                            \"NET_RAW\",\n                                            \"NET_ADMIN\"\n                                        ]\n                                    },\n                                    \"privileged\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"kindnet\",\n                        \"serviceAccount\": \"kindnet\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\",\n                                \"effect\": \"NoSchedule\"\n                            }\n                        ]\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1\n                    }\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 3,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 3,\n                \"numberReady\": 3,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 3,\n                \"numberAvailable\": 3\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy\",\n    
            \"uid\": \"cb9dfaaa-9bec-4422-ad67-55d6f56a5fb3\",\n                \"resourceVersion\": \"510\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2019-11-22T03:23:56Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-proxy\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-proxy\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"configMap\": {\n                                    \"name\": \"kube-proxy\",\n                                    \"defaultMode\": 420\n                                }\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"hostPath\": {\n                                    \"path\": \"/run/xtables.lock\",\n                                    \"type\": \"FileOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"hostPath\": {\n                                    \"path\": \"/lib/modules\",\n                                    \"type\": \"\"\n                                }\n                            }\n            
            ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\",\n                                \"command\": [\n                                    \"/usr/local/bin/kube-proxy\",\n                                    \"--config=/var/lib/kube-proxy/config.conf\",\n                                    \"--hostname-override=$(NODE_NAME)\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"NODE_NAME\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"spec.nodeName\"\n                                            }\n                                        }\n                                    }\n                                ],\n                                \"resources\": {},\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"kube-proxy\",\n                                        \"mountPath\": \"/var/lib/kube-proxy\"\n                                    },\n                                    {\n                                        \"name\": \"xtables-lock\",\n                                        \"mountPath\": \"/run/xtables.lock\"\n                                    },\n                                    {\n                                        \"name\": \"lib-modules\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/lib/modules\"\n                                    }\n          
                      ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"privileged\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"kube-proxy\",\n                        \"serviceAccount\": \"kube-proxy\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1\n                    }\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 3,\n                \"numberMisscheduled\": 0,\n                
\"desiredNumberScheduled\": 3,\n                \"numberReady\": 3,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 3,\n                \"numberAvailable\": 3\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments\",\n        \"resourceVersion\": \"25045\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/coredns\",\n                \"uid\": \"d1a142b4-beb2-484e-bff6-b330c66cd98c\",\n                \"resourceVersion\": \"652\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2019-11-22T03:23:56Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                
        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                
                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": 
\"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"effect\": \"NoSchedule\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        
\"maxUnavailable\": 1,\n                        \"maxSurge\": \"25%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 2,\n                \"updatedReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2019-11-22T03:25:12Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:25:12Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2019-11-22T03:25:17Z\",\n                        \"lastTransitionTime\": \"2019-11-22T03:24:12Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-6955765f44\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets\",\n        \"resourceVersion\": \"25047\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/coredns-6955765f44\",\n                \"uid\": \"2bd55972-0d13-43fa-ac4c-5dd3d07a8b9d\",\n      
          \"resourceVersion\": \"651\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"6955765f44\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"2\",\n                    \"deployment.kubernetes.io/max-replicas\": \"3\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns\",\n                        \"uid\": \"d1a142b4-beb2-484e-bff6-b330c66cd98c\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\",\n                        \"pod-template-hash\": \"6955765f44\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\",\n                            \"pod-template-hash\": \"6955765f44\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                           
             {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n           
                             \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": 
\"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"effect\": \"NoSchedule\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 2,\n                \"fullyLabeledReplicas\": 2,\n                
\"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"observedGeneration\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/kube-system/pods\",\n        \"resourceVersion\": \"25048\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-mxkvk\",\n                \"generateName\": \"coredns-6955765f44-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/coredns-6955765f44-mxkvk\",\n                \"uid\": \"7549aba8-0f69-4752-889b-e285e588758b\",\n                \"resourceVersion\": \"636\",\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"6955765f44\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-6955765f44\",\n                        \"uid\": \"2bd55972-0d13-43fa-ac4c-5dd3d07a8b9d\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 
420\n                        }\n                    },\n                    {\n                        \"name\": \"coredns-token-9v87w\",\n                        \"secret\": {\n                            \"secretName\": \"coredns-token-9v87w\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                           
     \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"coredns-token-9v87w\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                           
     \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"kind-control-plane\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                
\"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:25:01Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:25:12Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:25:12Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:25:01Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.4\",\n                \"podIP\": \"10.244.0.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"10.244.0.2\"\n                    }\n                ],\n                \"startTime\": \"2019-11-22T03:25:01Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-11-22T03:25:08Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                        
\"imageID\": \"sha256:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61\",\n                        \"containerID\": \"containerd://9d03f2d07063dd51e360cfe61a396ff957f1c65510d68137b5dbed034b0e4430\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-v49tc\",\n                \"generateName\": \"coredns-6955765f44-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/coredns-6955765f44-v49tc\",\n                \"uid\": \"bb0fc871-0d36-43b6-b69c-cd85dff6027f\",\n                \"resourceVersion\": \"649\",\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"6955765f44\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-6955765f44\",\n                        \"uid\": \"2bd55972-0d13-43fa-ac4c-5dd3d07a8b9d\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            
\"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"coredns-token-9v87w\",\n                        \"secret\": {\n                            \"secretName\": \"coredns-token-9v87w\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n          
                      \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"coredns-token-9v87w\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n          
                      \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"kind-control-plane\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n         
       \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:25:01Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:25:17Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:25:17Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:25:01Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.4\",\n                \"podIP\": \"10.244.0.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"10.244.0.3\"\n                    }\n                ],\n                \"startTime\": \"2019-11-22T03:25:01Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-11-22T03:25:08Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                  
      \"imageID\": \"sha256:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61\",\n                        \"containerID\": \"containerd://e339b3049229a7b3b5491c468e80a2fcb1a42ceeb4bfccb1f03ab8c1725de8c6\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/etcd-kind-control-plane\",\n                \"uid\": \"e327a417-f35f-4b35-8398-1f7a85844094\",\n                \"resourceVersion\": \"226\",\n                \"creationTimestamp\": \"2019-11-22T03:23:57Z\",\n                \"labels\": {\n                    \"component\": \"etcd\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"f2a91d66e944b807e91ce3f5623b9f1e\",\n                    \"kubernetes.io/config.mirror\": \"f2a91d66e944b807e91ce3f5623b9f1e\",\n                    \"kubernetes.io/config.seen\": \"2019-11-22T03:23:56.41676803Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"e0ef7c67-b911-4212-b362-d0c7fd48544c\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"etcd-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd\",\n                           
 \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcd-data\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/etcd\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd\",\n                        \"image\": \"k8s.gcr.io/etcd:3.4.3-0\",\n                        \"command\": [\n                            \"etcd\",\n                            \"--advertise-client-urls=https://172.17.0.4:2379\",\n                            \"--cert-file=/etc/kubernetes/pki/etcd/server.crt\",\n                            \"--client-cert-auth=true\",\n                            \"--data-dir=/var/lib/etcd\",\n                            \"--initial-advertise-peer-urls=https://172.17.0.4:2380\",\n                            \"--initial-cluster=kind-control-plane=https://172.17.0.4:2380\",\n                            \"--key-file=/etc/kubernetes/pki/etcd/server.key\",\n                            \"--listen-client-urls=https://127.0.0.1:2379,https://172.17.0.4:2379\",\n                            \"--listen-metrics-urls=http://127.0.0.1:2381\",\n                            \"--listen-peer-urls=https://172.17.0.4:2380\",\n                            \"--name=kind-control-plane\",\n                            \"--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt\",\n                            \"--peer-client-cert-auth=true\",\n                            \"--peer-key-file=/etc/kubernetes/pki/etcd/peer.key\",\n                            \"--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\",\n                            \"--snapshot-count=10000\",\n                            \"--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\"\n                        ],\n              
          \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"etcd-data\",\n                                \"mountPath\": \"/var/lib/etcd\"\n                            },\n                            {\n                                \"name\": \"etcd-certs\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 2381,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n  
              \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.4\",\n                \"podIP\": \"172.17.0.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.4\"\n                    }\n                ],\n                \"startTime\": \"2019-11-22T03:23:57Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-11-22T03:23:48Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": 
true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcd:3.4.3-0\",\n                        \"imageID\": \"sha256:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f\",\n                        \"containerID\": \"containerd://6524cf252b8f2649a8bd7c3d822103e96b102e1b2fa5a834bdda705dfa6f6811\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-krxhw\",\n                \"generateName\": \"kindnet-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-krxhw\",\n                \"uid\": \"214377ff-8ee6-49a4-b676-1b9e5584a1d3\",\n                \"resourceVersion\": \"522\",\n                \"creationTimestamp\": \"2019-11-22T03:24:30Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"controller-revision-hash\": \"775d694485\",\n                    \"k8s-app\": \"kindnet\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kindnet\",\n                        \"uid\": \"4d8485ee-8e7b-4f06-932a-ef1d60e6b1de\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cni-cfg\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n     
                   }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kindnet-token-hrws2\",\n                        \"secret\": {\n                            \"secretName\": \"kindnet-token-hrws2\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"image\": \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n                        \"env\": [\n                            {\n                                \"name\": \"HOST_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.hostIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.podIP\"\n                                    }\n     
                           }\n                            },\n                            {\n                                \"name\": \"POD_SUBNET\",\n                                \"value\": \"10.244.0.0/16\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kindnet-token-hrws2\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n              
                  \"add\": [\n                                    \"NET_RAW\",\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"kindnet\",\n                \"serviceAccount\": \"kindnet\",\n                \"nodeName\": \"kind-worker\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-worker\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": 
\"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priority\": 0,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:30Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                     
   \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:37Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:37Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:30Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.2\",\n                \"podIP\": \"172.17.0.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.2\"\n                    }\n                ],\n                \"startTime\": \"2019-11-22T03:24:30Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-11-22T03:24:36Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"sha256:aa67fec7d7ef71445da9a84e9bc88afca2538e9a0aebcba6ef9509b7cf313d17\",\n                        \"imageID\": \"docker.io/kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n                        \"containerID\": \"containerd://457e7eb51be16bcf8bcb7680fd913b6afdf93b64ce0b960c6689c3350d0e2909\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Guaranteed\"\n            }\n        },\n        {\n    
        \"metadata\": {\n                \"name\": \"kindnet-lnv5z\",\n                \"generateName\": \"kindnet-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-lnv5z\",\n                \"uid\": \"816bf19e-4fa9-46ef-946c-ce7648120dec\",\n                \"resourceVersion\": \"407\",\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"controller-revision-hash\": \"775d694485\",\n                    \"k8s-app\": \"kindnet\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kindnet\",\n                        \"uid\": \"4d8485ee-8e7b-4f06-932a-ef1d60e6b1de\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cni-cfg\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                           
 \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kindnet-token-hrws2\",\n                        \"secret\": {\n                            \"secretName\": \"kindnet-token-hrws2\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"image\": \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n                        \"env\": [\n                            {\n                                \"name\": \"HOST_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.hostIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.podIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_SUBNET\",\n                                \"value\": \"10.244.0.0/16\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            },\n                            
\"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kindnet-token-hrws2\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_RAW\",\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": 
\"kindnet\",\n                \"serviceAccount\": \"kindnet\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-control-plane\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                   
 {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priority\": 0,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:12Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:16Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:16Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                      
  \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:12Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.4\",\n                \"podIP\": \"172.17.0.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.4\"\n                    }\n                ],\n                \"startTime\": \"2019-11-22T03:24:12Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-11-22T03:24:16Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"sha256:aa67fec7d7ef71445da9a84e9bc88afca2538e9a0aebcba6ef9509b7cf313d17\",\n                        \"imageID\": \"docker.io/kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n                        \"containerID\": \"containerd://a13e881a9e72d22e866a73647e0152c344c9c3a526afc70efb0022e3cd723bba\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Guaranteed\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-rmvhf\",\n                \"generateName\": \"kindnet-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-rmvhf\",\n                \"uid\": \"17382d95-4a62-4fd4-bfaa-f6305d842575\",\n                \"resourceVersion\": \"524\",\n                \"creationTimestamp\": \"2019-11-22T03:24:31Z\",\n                \"labels\": {\n                    \"app\": 
\"kindnet\",\n                    \"controller-revision-hash\": \"775d694485\",\n                    \"k8s-app\": \"kindnet\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kindnet\",\n                        \"uid\": \"4d8485ee-8e7b-4f06-932a-ef1d60e6b1de\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cni-cfg\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kindnet-token-hrws2\",\n                        \"secret\": {\n                            \"secretName\": \"kindnet-token-hrws2\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n      
                  \"image\": \"kindest/kindnetd:0.5.3@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n                        \"env\": [\n                            {\n                                \"name\": \"HOST_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.hostIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.podIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_SUBNET\",\n                                \"value\": \"10.244.0.0/16\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": 
\"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kindnet-token-hrws2\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_RAW\",\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"kindnet\",\n                \"serviceAccount\": \"kindnet\",\n                \"nodeName\": \"kind-worker2\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n    
                                    {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-worker2\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": 
\"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priority\": 0,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:31Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:37Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:37Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:31Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.3\",\n                \"podIP\": \"172.17.0.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.3\"\n                    }\n                ],\n                \"startTime\": 
\"2019-11-22T03:24:31Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-11-22T03:24:36Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"sha256:aa67fec7d7ef71445da9a84e9bc88afca2538e9a0aebcba6ef9509b7cf313d17\",\n                        \"imageID\": \"docker.io/kindest/kindnetd@sha256:bc1833b3da442bb639008dd5a62861a0419d3f64b58fce6fb38b749105232555\",\n                        \"containerID\": \"containerd://ff26dbbc07d0af0c1ccaf093c0fd791c7b8a9cd8c18587f3c2cd46e203f26eea\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Guaranteed\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane\",\n                \"uid\": \"2981411a-7550-488c-962e-18f56c513d47\",\n                \"resourceVersion\": \"250\",\n                \"creationTimestamp\": \"2019-11-22T03:23:57Z\",\n                \"labels\": {\n                    \"component\": \"kube-apiserver\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"cba17914d9ae1f3e11cf2c5678398c42\",\n                    \"kubernetes.io/config.mirror\": \"cba17914d9ae1f3e11cf2c5678398c42\",\n                    \"kubernetes.io/config.seen\": \"2019-11-22T03:23:56.416746595Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n    
            },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"e0ef7c67-b911-4212-b362-d0c7fd48544c\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"ca-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl/certs\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"k8s-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-local-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    }\n                
],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"image\": \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.1116_94ec940998d730\",\n                        \"command\": [\n                            \"kube-apiserver\",\n                            \"--advertise-address=172.17.0.4\",\n                            \"--allow-privileged=true\",\n                            \"--authorization-mode=Node,RBAC\",\n                            \"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--enable-admission-plugins=NodeRestriction\",\n                            \"--enable-bootstrap-token-auth=true\",\n                            \"--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt\",\n                            \"--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt\",\n                            \"--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key\",\n                            \"--etcd-servers=https://127.0.0.1:2379\",\n                            \"--insecure-port=0\",\n                            \"--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt\",\n                            \"--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key\",\n                            \"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname\",\n                            \"--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt\",\n                            \"--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key\",\n                            \"--requestheader-allowed-names=front-proxy-client\",\n                            \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n                            \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n                            \"--requestheader-group-headers=X-Remote-Group\",\n             
               \"--requestheader-username-headers=X-Remote-User\",\n                            \"--secure-port=6443\",\n                            \"--service-account-key-file=/etc/kubernetes/pki/sa.pub\",\n                            \"--service-cluster-ip-range=10.96.0.0/12\",\n                            \"--tls-cert-file=/etc/kubernetes/pki/apiserver.crt\",\n                            \"--tls-private-key-file=/etc/kubernetes/pki/apiserver.key\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"250m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"ca-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"etc-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"k8s-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/pki\"\n                            },\n                            {\n                                \"name\": \"usr-local-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/share/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"usr-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ca-certificates\"\n            
                }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 6443,\n                                \"host\": \"172.17.0.4\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": 
\"2019-11-22T03:23:57Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.4\",\n                \"podIP\": \"172.17.0.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.4\"\n                    }\n                ],\n                \"startTime\": \"2019-11-22T03:23:57Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-11-22T03:23:48Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.0.1116_94ec940998d730\",\n                        \"imageID\": \"sha256:99d6109cc16bfb427133f7001b85f6d383d7ff7a2e7cce268bd581afaf2bbc4e\",\n                        \"containerID\": \"containerd://7ad0ca3b8c8b4ac127d4863b58f45b4809dc35ac36f9499fa0bca19bf50c79da\",\n                     
   \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-controller-manager-kind-control-plane\",\n                \"uid\": \"af733a89-3a16-48dc-a021-cc3957ad3eec\",\n                \"resourceVersion\": \"266\",\n                \"creationTimestamp\": \"2019-11-22T03:23:57Z\",\n                \"labels\": {\n                    \"component\": \"kube-controller-manager\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"ff0f949b32288a6bc6b68d5bd5d21de4\",\n                    \"kubernetes.io/config.mirror\": \"ff0f949b32288a6bc6b68d5bd5d21de4\",\n                    \"kubernetes.io/config.seen\": \"2019-11-22T03:23:56.416756387Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"e0ef7c67-b911-4212-b362-d0c7fd48544c\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"ca-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl/certs\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-ca-certificates\",\n                        
\"hostPath\": {\n                            \"path\": \"/etc/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"flexvolume-dir\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"k8s-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/controller-manager.conf\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-local-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"image\": 
\"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.1116_94ec940998d730\",\n                        \"command\": [\n                            \"kube-controller-manager\",\n                            \"--allocate-node-cidrs=true\",\n                            \"--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n                            \"--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n                            \"--bind-address=127.0.0.1\",\n                            \"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--cluster-cidr=10.244.0.0/16\",\n                            \"--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--cluster-signing-key-file=/etc/kubernetes/pki/ca.key\",\n                            \"--controllers=*,bootstrapsigner,tokencleaner\",\n                            \"--enable-hostpath-provisioner=true\",\n                            \"--kubeconfig=/etc/kubernetes/controller-manager.conf\",\n                            \"--leader-elect=true\",\n                            \"--node-cidr-mask-size=24\",\n                            \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n                            \"--root-ca-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--service-account-private-key-file=/etc/kubernetes/pki/sa.key\",\n                            \"--service-cluster-ip-range=10.96.0.0/12\",\n                            \"--use-service-account-credentials=true\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"200m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"ca-certs\",\n                                \"readOnly\": true,\n   
                             \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"etc-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"flexvolume-dir\",\n                                \"mountPath\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\"\n                            },\n                            {\n                                \"name\": \"k8s-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/pki\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/controller-manager.conf\"\n                            },\n                            {\n                                \"name\": \"usr-local-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/share/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"usr-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ca-certificates\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10257,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTPS\"\n                 
           },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    },\n                    {\n                        \"type\": 
\"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.4\",\n                \"podIP\": \"172.17.0.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.4\"\n                    }\n                ],\n                \"startTime\": \"2019-11-22T03:23:57Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-11-22T03:23:48Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.0.1116_94ec940998d730\",\n                        \"imageID\": \"sha256:9f65776e23369e1eebc44f1afa82784a1f08e9c8177f9a300dba6b5bfce87bf5\",\n                        \"containerID\": \"containerd://079f3ee23af6644bdf61a078c1ee5ec3e5c254f3dcf4194fa6e8ede68864f8ab\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-m22kv\",\n                \"generateName\": \"kube-proxy-\",\n                \"namespace\": \"kube-system\",\n                
\"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-m22kv\",\n                \"uid\": \"35611aac-8044-4d6b-babe-443712ca7b89\",\n                \"resourceVersion\": \"504\",\n                \"creationTimestamp\": \"2019-11-22T03:24:30Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"85bc8b6896\",\n                    \"k8s-app\": \"kube-proxy\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-proxy\",\n                        \"uid\": \"cb9dfaaa-9bec-4422-ad67-55d6f56a5fb3\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"configMap\": {\n                            \"name\": \"kube-proxy\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-proxy-token-m52zp\",\n                        \"secret\": {\n                            \"secretName\": 
\"kube-proxy-token-m52zp\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\",\n                            \"--config=/var/lib/kube-proxy/config.conf\",\n                            \"--hostname-override=$(NODE_NAME)\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"mountPath\": \"/var/lib/kube-proxy\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-proxy-token-m52zp\",\n                                \"readOnly\": true,\n 
                               \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"kube-proxy\",\n                \"serviceAccount\": \"kube-proxy\",\n                \"nodeName\": \"kind-worker\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-worker\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                
\"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true\n 
           },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:30Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:34Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:34Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:30Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.2\",\n                \"podIP\": \"172.17.0.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.2\"\n                    }\n                ],\n                \"startTime\": \"2019-11-22T03:24:30Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-11-22T03:24:33Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        
\"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\",\n                        \"imageID\": \"sha256:91babd1ca74b18913c767badc6fe739e592e7a210f55c45a3d077cc66ae4ded2\",\n                        \"containerID\": \"containerd://8d092648549d5cd5f41893a55ed9d387c0bcbb0ffeb38ce05725dcb20a867e63\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-v8fsf\",\n                \"generateName\": \"kube-proxy-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-v8fsf\",\n                \"uid\": \"b5799c19-4387-4a22-a896-02ad2670430d\",\n                \"resourceVersion\": \"509\",\n                \"creationTimestamp\": \"2019-11-22T03:24:31Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"85bc8b6896\",\n                    \"k8s-app\": \"kube-proxy\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-proxy\",\n                        \"uid\": \"cb9dfaaa-9bec-4422-ad67-55d6f56a5fb3\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"configMap\": {\n                            \"name\": \"kube-proxy\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": 
\"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-proxy-token-m52zp\",\n                        \"secret\": {\n                            \"secretName\": \"kube-proxy-token-m52zp\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\",\n                            \"--config=/var/lib/kube-proxy/config.conf\",\n                            \"--hostname-override=$(NODE_NAME)\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                             
   \"mountPath\": \"/var/lib/kube-proxy\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-proxy-token-m52zp\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"kube-proxy\",\n                \"serviceAccount\": \"kube-proxy\",\n                \"nodeName\": \"kind-worker2\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    
\"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-worker2\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        
\"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:31Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:35Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:35Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:31Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.3\",\n                \"podIP\": \"172.17.0.3\",\n                
\"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.3\"\n                    }\n                ],\n                \"startTime\": \"2019-11-22T03:24:31Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-11-22T03:24:34Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\",\n                        \"imageID\": \"sha256:91babd1ca74b18913c767badc6fe739e592e7a210f55c45a3d077cc66ae4ded2\",\n                        \"containerID\": \"containerd://c17366c46f6cfa36d3187833ff7448c2c141d3e1c3289bc9b24d3a8e60624645\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-vjhtv\",\n                \"generateName\": \"kube-proxy-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-vjhtv\",\n                \"uid\": \"89d64f12-eb32-4ac8-b9d6-7c369ca54e81\",\n                \"resourceVersion\": \"393\",\n                \"creationTimestamp\": \"2019-11-22T03:24:12Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"85bc8b6896\",\n                    \"k8s-app\": \"kube-proxy\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n  
                      \"name\": \"kube-proxy\",\n                        \"uid\": \"cb9dfaaa-9bec-4422-ad67-55d6f56a5fb3\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"configMap\": {\n                            \"name\": \"kube-proxy\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-proxy-token-m52zp\",\n                        \"secret\": {\n                            \"secretName\": \"kube-proxy-token-m52zp\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\",\n                            \"--config=/var/lib/kube-proxy/config.conf\",\n                            \"--hostname-override=$(NODE_NAME)\"\n                        ],\n                        \"env\": [\n  
                          {\n                                \"name\": \"NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"mountPath\": \"/var/lib/kube-proxy\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-proxy-token-m52zp\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                
\"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"kube-proxy\",\n                \"serviceAccount\": \"kube-proxy\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-control-plane\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                   
     \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:12Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:14Z\"\n                    
},\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:14Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:24:12Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.4\",\n                \"podIP\": \"172.17.0.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.4\"\n                    }\n                ],\n                \"startTime\": \"2019-11-22T03:24:12Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-11-22T03:24:14Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.0.1116_94ec940998d730\",\n                        \"imageID\": \"sha256:91babd1ca74b18913c767badc6fe739e592e7a210f55c45a3d077cc66ae4ded2\",\n                        \"containerID\": \"containerd://00aa0fa40ae82832a073dc3f9a1455ac35979669724e00be3ba8b80bb276d2fb\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                
\"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-scheduler-kind-control-plane\",\n                \"uid\": \"d16f5bd4-a4be-4a8b-a8b7-02bd6ac5d4a6\",\n                \"resourceVersion\": \"225\",\n                \"creationTimestamp\": \"2019-11-22T03:23:57Z\",\n                \"labels\": {\n                    \"component\": \"kube-scheduler\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"0c8b8293a242a2fe11ecf6767a15848f\",\n                    \"kubernetes.io/config.mirror\": \"0c8b8293a242a2fe11ecf6767a15848f\",\n                    \"kubernetes.io/config.seen\": \"2019-11-22T03:23:56.41675944Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"e0ef7c67-b911-4212-b362-d0c7fd48544c\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/scheduler.conf\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"image\": \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.1116_94ec940998d730\",\n                        \"command\": [\n                            \"kube-scheduler\",\n                            \"--authentication-kubeconfig=/etc/kubernetes/scheduler.conf\",\n                  
          \"--authorization-kubeconfig=/etc/kubernetes/scheduler.conf\",\n                            \"--bind-address=127.0.0.1\",\n                            \"--kubeconfig=/etc/kubernetes/scheduler.conf\",\n                            \"--leader-elect=true\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/scheduler.conf\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10259,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": 
{},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2019-11-22T03:23:57Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.4\",\n                \"podIP\": \"172.17.0.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.4\"\n                    }\n                ],\n                \"startTime\": \"2019-11-22T03:23:57Z\",\n                \"containerStatuses\": [\n                    
{\n                        \"name\": \"kube-scheduler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2019-11-22T03:23:48Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.0.1116_94ec940998d730\",\n                        \"imageID\": \"sha256:397b404f5d67504888595e7867cdc7ef0965e504f50b5405603cc82f9f4998f3\",\n                        \"containerID\": \"containerd://b9ae38dae243eb75b7a84e923e34dcb9df3a0fb012fd4ece20f552180107439e\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        }\n    ]\n}\n==== START logs for container coredns of pod kube-system/coredns-6955765f44-mxkvk ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7\nCoreDNS-1.6.5\nlinux/amd64, go1.13.4, c2fd1b2\n==== END logs for container coredns of pod kube-system/coredns-6955765f44-mxkvk ====\n==== START logs for container coredns of pod kube-system/coredns-6955765f44-v49tc ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7\nCoreDNS-1.6.5\nlinux/amd64, go1.13.4, c2fd1b2\n==== END logs for container coredns of pod kube-system/coredns-6955765f44-v49tc ====\n==== START logs for container etcd of pod kube-system/etcd-kind-control-plane ====\n[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead\n2019-11-22 03:23:48.497758 I | etcdmain: etcd Version: 3.4.3\n2019-11-22 03:23:48.497901 I | etcdmain: Git SHA: 3cf2f69b5\n2019-11-22 03:23:48.497907 I | etcdmain: Go Version: go1.12.12\n2019-11-22 03:23:48.497913 I | etcdmain: Go OS/Arch: linux/amd64\n2019-11-22 03:23:48.497920 I | 
etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8\n[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead\n2019-11-22 03:23:48.498138 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = \n2019-11-22 03:23:48.499153 I | embed: name = kind-control-plane\n2019-11-22 03:23:48.499166 I | embed: data dir = /var/lib/etcd\n2019-11-22 03:23:48.499171 I | embed: member dir = /var/lib/etcd/member\n2019-11-22 03:23:48.499177 I | embed: heartbeat = 100ms\n2019-11-22 03:23:48.499182 I | embed: election = 1000ms\n2019-11-22 03:23:48.499186 I | embed: snapshot count = 10000\n2019-11-22 03:23:48.499196 I | embed: advertise client URLs = https://172.17.0.4:2379\n2019-11-22 03:23:48.554475 I | etcdserver: starting member 40fd14fa28910cab in cluster a6ea9ad1b116d02f\nraft2019/11/22 03:23:48 INFO: 40fd14fa28910cab switched to configuration voters=()\nraft2019/11/22 03:23:48 INFO: 40fd14fa28910cab became follower at term 0\nraft2019/11/22 03:23:48 INFO: newRaft 40fd14fa28910cab [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]\nraft2019/11/22 03:23:48 INFO: 40fd14fa28910cab became follower at term 1\nraft2019/11/22 03:23:48 INFO: 40fd14fa28910cab switched to configuration voters=(4682922252190157995)\n2019-11-22 03:23:48.568857 W | auth: simple token is not cryptographically signed\n2019-11-22 03:23:48.576168 I | etcdserver: starting server... 
[version: 3.4.3, cluster version: to_be_decided]\n2019-11-22 03:23:48.577559 I | etcdserver: 40fd14fa28910cab as single-node; fast-forwarding 9 ticks (election ticks 10)\nraft2019/11/22 03:23:48 INFO: 40fd14fa28910cab switched to configuration voters=(4682922252190157995)\n2019-11-22 03:23:48.578359 I | etcdserver/membership: added member 40fd14fa28910cab [https://172.17.0.4:2380] to cluster a6ea9ad1b116d02f\n2019-11-22 03:23:48.580521 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = \n2019-11-22 03:23:48.580613 I | embed: listening for peers on 172.17.0.4:2380\n2019-11-22 03:23:48.580748 I | embed: listening for metrics on http://127.0.0.1:2381\nraft2019/11/22 03:23:48 INFO: 40fd14fa28910cab is starting a new election at term 1\nraft2019/11/22 03:23:48 INFO: 40fd14fa28910cab became candidate at term 2\nraft2019/11/22 03:23:48 INFO: 40fd14fa28910cab received MsgVoteResp from 40fd14fa28910cab at term 2\nraft2019/11/22 03:23:48 INFO: 40fd14fa28910cab became leader at term 2\nraft2019/11/22 03:23:48 INFO: raft.node: 40fd14fa28910cab elected leader 40fd14fa28910cab at term 2\n2019-11-22 03:23:48.757906 I | etcdserver: setting up the initial cluster version to 3.4\n2019-11-22 03:23:48.759567 N | etcdserver/membership: set the initial cluster version to 3.4\n2019-11-22 03:23:48.759639 I | etcdserver/api: enabled capabilities for version 3.4\n2019-11-22 03:23:48.759728 I | etcdserver: published {Name:kind-control-plane ClientURLs:[https://172.17.0.4:2379]} to cluster a6ea9ad1b116d02f\n2019-11-22 03:23:48.759745 I | embed: ready to serve client requests\n2019-11-22 03:23:48.760047 I | embed: ready to serve client requests\n2019-11-22 03:23:48.763573 I | embed: serving client requests on 172.17.0.4:2379\n2019-11-22 03:23:48.765289 I | embed: serving client requests on 127.0.0.1:2379\n2019-11-22 03:25:01.423543 W | etcdserver: request 
\"header:<ID:912944919111258672 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/events/kube-system/coredns-6955765f44-v49tc.15d95e2f18dcd1fc\\\" mod_revision:0 > success:<request_put:<key:\\\"/registry/events/kube-system/coredns-6955765f44-v49tc.15d95e2f18dcd1fc\\\" value_size:389 lease:912944919111258627 >> failure:<>>\" with result \"size:16\" took too long (120.960274ms) to execute\n2019-11-22 03:25:01.542909 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kube-system/coredns-6955765f44-v49tc\\\" \" with result \"range_response_count:1 size:1363\" took too long (288.295309ms) to execute\n2019-11-22 03:25:02.674056 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:309\" took too long (335.99744ms) to execute\n2019-11-22 03:25:03.320309 W | etcdserver: request \"header:<ID:912944919111258684 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/services/endpoints/kube-system/kube-controller-manager\\\" mod_revision:595 > success:<request_put:<key:\\\"/registry/services/endpoints/kube-system/kube-controller-manager\\\" value_size:371 >> failure:<request_range:<key:\\\"/registry/services/endpoints/kube-system/kube-controller-manager\\\" > >>\" with result \"size:16\" took too long (178.100035ms) to execute\n2019-11-22 03:25:03.504155 W | etcdserver: read-only range request \"key:\\\"/registry/cronjobs/\\\" range_end:\\\"/registry/cronjobs0\\\" limit:500 \" with result \"range_response_count:0 size:5\" took too long (520.579281ms) to execute\n2019-11-22 03:25:03.645122 W | etcdserver: read-only range request \"key:\\\"/registry/jobs\\\" range_end:\\\"/registry/jobt\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (467.825263ms) to execute\n2019-11-22 03:25:03.645603 W | etcdserver: read-only range request 
\"key:\\\"/registry/pods/kube-system/coredns-6955765f44-mxkvk\\\" \" with result \"range_response_count:1 size:1363\" took too long (653.438958ms) to execute\n2019-11-22 03:25:03.679235 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:291\" took too long (408.86646ms) to execute\n2019-11-22 03:25:03.767999 W | etcdserver: read-only range request \"key:\\\"/registry/volumeattachments\\\" range_end:\\\"/registry/volumeattachmentt\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (288.405578ms) to execute\n2019-11-22 03:25:04.489183 W | etcdserver: request \"header:<ID:912944919111258689 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/leases/kube-system/kube-scheduler\\\" mod_revision:598 > success:<request_put:<key:\\\"/registry/leases/kube-system/kube-scheduler\\\" value_size:225 >> failure:<request_range:<key:\\\"/registry/leases/kube-system/kube-scheduler\\\" > >>\" with result \"size:16\" took too long (162.473695ms) to execute\n2019-11-22 03:25:04.489375 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:309\" took too long (580.919477ms) to execute\n2019-11-22 03:25:04.958535 W | etcdserver: read-only range request \"key:\\\"/registry/controllers\\\" range_end:\\\"/registry/controllert\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (640.817903ms) to execute\n2019-11-22 03:25:05.884647 W | etcdserver: read-only range request \"key:\\\"/registry/csidrivers\\\" range_end:\\\"/registry/csidrivert\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (1.588004989s) to execute\n2019-11-22 03:25:06.012119 W | wal: sync duration of 1.056869866s, expected less than 1s\n2019-11-22 03:25:06.053992 W | etcdserver: read-only range 
request \"key:\\\"/registry/namespaces/default\\\" \" with result \"range_response_count:1 size:172\" took too long (1.081596205s) to execute\n2019-11-22 03:25:07.633228 W | etcdserver: read-only range request \"key:\\\"/registry/validatingwebhookconfigurations\\\" range_end:\\\"/registry/validatingwebhookconfigurationt\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (796.733335ms) to execute\n2019-11-22 03:25:07.633635 W | etcdserver: read-only range request \"key:\\\"/registry/minions/\\\" range_end:\\\"/registry/minions0\\\" \" with result \"range_response_count:3 size:6352\" took too long (892.484366ms) to execute\n2019-11-22 03:25:07.635494 W | etcdserver: read-only range request \"key:\\\"/registry/minions/\\\" range_end:\\\"/registry/minions0\\\" \" with result \"range_response_count:3 size:6352\" took too long (598.131817ms) to execute\n2019-11-22 03:25:07.636294 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:458\" took too long (204.238532ms) to execute\n2019-11-22 03:25:07.636505 W | etcdserver: read-only range request \"key:\\\"/registry/minions/\\\" range_end:\\\"/registry/minions0\\\" \" with result \"range_response_count:3 size:6352\" took too long (532.359723ms) to execute\n2019-11-22 03:25:07.645123 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:440\" took too long (221.405074ms) to execute\n2019-11-22 03:25:07.651062 W | etcdserver: request \"header:<ID:912944919111258708 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/leases/kube-node-lease/kind-control-plane\\\" mod_revision:583 > success:<request_put:<key:\\\"/registry/leases/kube-node-lease/kind-control-plane\\\" value_size:255 >> 
failure:<request_range:<key:\"/registry/leases/kube-node-lease/kind-control-plane\" > >>" with result "size:16" took too long (212.946161ms) to execute
2019-11-22 03:25:41.883384 W | etcdserver: request "header:<ID:912944919111259532 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/provisioning-8190/default\" mod_revision:1013 > success:<request_put:<key:\"/registry/serviceaccounts/provisioning-8190/default\" value_size:155 >> failure:<request_range:<key:\"/registry/serviceaccounts/provisioning-8190/default\" > >>" with result "size:16" took too long (103.776655ms) to execute
2019-11-22 03:25:41.883703 W | etcdserver: read-only range request "key:\"/registry/pods/persistent-local-volumes-test-1542/hostexec-kind-worker-gb9fq\" " with result "range_response_count:1 size:1214" took too long (209.302912ms) to execute
2019-11-22 03:25:41.884135 W | etcdserver: read-only range request "key:\"/registry/leases\" range_end:\"/registry/leaset\" count_only:true " with result "range_response_count:0 size:7" took too long (190.267653ms) to execute
2019-11-22 03:27:24.577731 W | etcdserver: read-only range request "key:\"/registry/cronjobs/cronjob-9139/concurrent\" " with result "range_response_count:1 size:597" took too long (103.986487ms) to execute
2019-11-22 03:27:24.845372 W | etcdserver: read-only range request "key:\"/registry/namespaces/projected-6960\" " with result "range_response_count:1 size:281" took too long (187.025682ms) to execute
2019-11-22 03:27:27.192011 W | etcdserver: read-only range request "key:\"/registry/events/deployment-8660/webserver-deployment-595b5b9587.15d95e4dab131918\" " with result "range_response_count:1 size:551" took too long (105.944792ms) to execute
2019-11-22 03:27:27.192195 W | etcdserver: read-only range request "key:\"/registry/networkpolicies/downward-api-2576/\" range_end:\"/registry/networkpolicies/downward-api-25760\" " with result "range_response_count:0 size:5" took too long (106.51026ms) to execute
2019-11-22 03:27:27.192395 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/deployment-1681/default\" " with result "range_response_count:1 size:225" took too long (107.381686ms) to execute
2019-11-22 03:27:27.192575 W | etcdserver: read-only range request "key:\"/registry/replicasets/deployment-8660/webserver-deployment-595b5b9587\" " with result "range_response_count:1 size:822" took too long (108.323941ms) to execute
2019-11-22 03:27:27.192729 W | etcdserver: read-only range request "key:\"/registry/secrets/deployment-1681/\" range_end:\"/registry/secrets/deployment-16810\" " with result "range_response_count:0 size:5" took too long (113.596007ms) to execute
2019-11-22 03:27:27.193989 W | etcdserver: read-only range request "key:\"/registry/events/deployment-1681/test-recreate-deployment-5f94c574ff-pw768.15d95e51066ae5ed\" " with result "range_response_count:0 size:5" took too long (121.575087ms) to execute
2019-11-22 03:27:27.825229 W | etcdserver: read-only range request "key:\"/registry/pods/pods-1625/server-envvars-6cb166b1-6553-4c36-b7d3-228afe4a6372\" " with result "range_response_count:1 size:1276" took too long (100.468341ms) to execute
2019-11-22 03:27:27.825533 W | etcdserver: read-only range request "key:\"/registry/cronjobs/deployment-1681/\" range_end:\"/registry/cronjobs/deployment-16810\" " with result "range_response_count:0 size:5" took too long (100.92135ms) to execute
2019-11-22 03:27:27.830429 W | etcdserver: read-only range request "key:\"/registry/deployments/pv-2534/\" range_end:\"/registry/deployments/pv-25340\" " with result "range_response_count:0 size:5" took too long (112.832039ms) to execute
2019-11-22 03:27:27.831724 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/runtimeclass-0/default\" " with result "range_response_count:1 size:223" took too long (101.906147ms) to execute
2019-11-22 03:27:27.832431 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker\" " with result "range_response_count:1 size:3804" took too long (102.25442ms) to execute
2019-11-22 03:27:28.415982 W | etcdserver: read-only range request "key:\"/registry/pods/subpath-2857/pod-subpath-test-configmap-bwwz\" " with result "range_response_count:1 size:1485" took too long (115.41814ms) to execute
2019-11-22 03:27:28.443754 W | etcdserver: read-only range request "key:\"/registry/deployments/deployment-1681/\" range_end:\"/registry/deployments/deployment-16810\" " with result "range_response_count:1 size:832" took too long (183.452135ms) to execute
2019-11-22 03:27:28.738612 W | etcdserver: read-only range request "key:\"/registry/podtemplates/pv-2534/\" range_end:\"/registry/podtemplates/pv-25340\" " with result "range_response_count:0 size:5" took too long (115.148531ms) to execute
2019-11-22 03:27:29.061330 W | etcdserver: read-only range request "key:\"/registry/events/job-7873/rand-non-local.15d95e48c708cdf3\" " with result "range_response_count:1 size:417" took too long (114.758719ms) to execute
2019-11-22 03:27:29.093621 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/csi-resizer-role-csi-mock-volumes-2478\" " with result "range_response_count:1 size:401" took too long (123.645171ms) to execute
2019-11-22 03:27:29.094067 W | etcdserver: read-only range request "key:\"/registry/limitranges/downward-api-2576/\" range_end:\"/registry/limitranges/downward-api-25760\" " with result "range_response_count:0 size:5" took too long (126.330469ms) to execute
2019-11-22 03:27:29.094575 W | etcdserver: read-only range request "key:\"/registry/networkpolicies/container-runtime-6485/\" range_end:\"/registry/networkpolicies/container-runtime-64850\" " with result "range_response_count:0 size:5" took too long (124.006086ms) to execute
2019-11-22 03:27:31.677539 W | etcdserver: read-only range request "key:\"/registry/pods/proxy-460/proxy-service-p5m5j-t2d62\" " with result "range_response_count:1 size:1766" took too long (116.449646ms) to execute
2019-11-22 03:27:31.689545 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/proxy-460/proxy-service-p5m5j\" " with result "range_response_count:1 size:493" took too long (126.655157ms) to execute
2019-11-22 03:27:31.692727 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/proxy-460/proxy-service-p5m5j\" " with result "range_response_count:1 size:493" took too long (126.856687ms) to execute
2019-11-22 03:27:31.704021 W | etcdserver: read-only range request "key:\"/registry/mygroup.example.com/foopfn25as/setup-instance\" " with result "range_response_count:1 size:369" took too long (107.88176ms) to execute
2019-11-22 03:27:32.179567 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/proxy-460/proxy-service-p5m5j\" " with result "range_response_count:1 size:493" took too long (111.245772ms) to execute
2019-11-22 03:27:32.186307 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/proxy-460/proxy-service-p5m5j\" " with result "range_response_count:1 size:493" took too long (117.872211ms) to execute
2019-11-22 03:27:32.215129 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/proxy-460/proxy-service-p5m5j\" " with result "range_response_count:1 size:493" took too long (146.760933ms) to execute
2019-11-22 03:27:35.861129 W | etcdserver: request "header:<ID:912944919111279924 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/namespaces/services-1242\" mod_revision:0 > success:<request_put:<key:\"/registry/namespaces/services-1242\" value_size:206 >> failure:<>>" with result "size:16" took too long (168.483558ms) to execute
2019-11-22 03:27:35.863065 W | etcdserver: read-only range request "key:\"/registry/namespaces/subpath-2857\" " with result "range_response_count:1 size:275" took too long (165.478586ms) to execute
2019-11-22 03:27:35.863263 W | etcdserver: read-only range request "key:\"/registry/pods/provisioning-3182/hostexec-kind-worker2-4hd56\" " with result "range_response_count:1 size:774" took too long (165.236124ms) to execute
2019-11-22 03:27:35.865407 W | etcdserver: read-only range request "key:\"/registry/pods/persistent-local-volumes-test-8084/hostexec-kind-worker-xrxt9\" " with result "range_response_count:1 size:804" took too long (109.246695ms) to execute
2019-11-22 03:27:35.865687 W | etcdserver: read-only range request "key:\"/registry/pods/services-7357/execpodxjvmc\" " with result "range_response_count:1 size:1163" took too long (131.496093ms) to execute
2019-11-22 03:27:35.866185 W | etcdserver: read-only range request "key:\"/registry/leases/csi-mock-volumes-2478/\" range_end:\"/registry/leases/csi-mock-volumes-24780\" " with result "range_response_count:0 size:5" took too long (167.693287ms) to execute
2019-11-22 03:27:35.868226 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/configmap-8221/\" range_end:\"/registry/resourcequotas/configmap-82210\" " with result "range_response_count:0 size:5" took too long (169.838343ms) to execute
2019-11-22 03:27:35.868636 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/persistent-local-volumes-test-6419/default\" " with result "range_response_count:1 size:228" took too long (170.465866ms) to execute
2019-11-22 03:27:35.997265 W | etcdserver: read-only range request "key:\"/registry/pods/volumemode-7887/security-context-e0ccfa3d-5f03-49dc-a397-521e47c43fe6\" " with result "range_response_count:1 size:1243" took too long (106.673345ms) to execute
2019-11-22 03:27:36.021660 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions/configmap-8221/\" range_end:\"/registry/controllerrevisions/configmap-82210\" " with result "range_response_count:0 size:5" took too long (133.33564ms) to execute
2019-11-22 03:27:36.040701 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-scheduler\" " with result "range_response_count:1 size:291" took too long (157.570407ms) to execute
2019-11-22 03:27:36.136634 W | etcdserver: read-only range request "key:\"/registry/leases/csi-mock-volumes-2478/\" range_end:\"/registry/leases/csi-mock-volumes-24780\" " with result "range_response_count:0 size:5" took too long (251.141202ms) to execute
2019-11-22 03:27:36.143437 W | etcdserver: read-only range request "key:\"/registry/pods/provisioning-4216/pod-subpath-test-local-preprovisionedpv-htvl\" " with result "range_response_count:0 size:5" took too long (114.242324ms) to execute
2019-11-22 03:27:36.149339 W | etcdserver: read-only range request "key:\"/registry/namespaces/emptydir-6079\" " with result "range_response_count:1 size:278" took too long (136.268626ms) to execute
2019-11-22 03:27:36.149997 W | etcdserver: request "header:<ID:912944919111279937 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/services-1242/default\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/services-1242/default\" value_size:116 >> failure:<>>" with result "size:16" took too long (112.952572ms) to execute
2019-11-22 03:27:36.151651 W | etcdserver: read-only range request "key:\"/registry/controllers/subpath-2857/\" range_end:\"/registry/controllers/subpath-28570\" " with result "range_response_count:0 size:5" took too long (114.068827ms) to execute
2019-11-22 03:27:36.182951 W | etcdserver: read-only range request "key:\"/registry/events/deployment-8660/webserver-deployment-595b5b9587-79s47.15d95e4e16161c90\" " with result "range_response_count:1 size:511" took too long (111.441629ms) to execute
2019-11-22 03:27:36.195673 W | etcdserver: read-only range request "key:\"/registry/pods/ephemeral-9382/inline-volume-tester2-h926q\" " with result "range_response_count:1 size:1328" took too long (111.38097ms) to execute
2019-11-22 03:27:36.694574 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/subpath-2857/\" range_end:\"/registry/services/endpoints/subpath-28570\" " with result "range_response_count:0 size:5" took too long (175.345885ms) to execute
2019-11-22 03:27:37.476172 W | etcdserver: request "header:<ID:912944919111280047 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/csi-mock-volumes-5111/csi-mockplugin-0.15d95e5303e30350\" mod_revision:0 > success:<request_put:<key:\"/registry/events/csi-mock-volumes-5111/csi-mockplugin-0.15d95e5303e30350\" value_size:403 lease:912944919111272694 >> failure:<>>" with result "size:16" took too long (125.023083ms) to execute
2019-11-22 03:27:37.478319 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/emptydir-6079/default\" " with result "range_response_count:1 size:221" took too long (126.45988ms) to execute
2019-11-22 03:27:37.478673 W | etcdserver: read-only range request "key:\"/registry/events/deployment-8660/webserver-deployment-595b5b9587-98wp2.15d95e4df00b3c0f\" " with result "range_response_count:1 size:510" took too long (130.244207ms) to execute
2019-11-22 03:27:37.478991 W | etcdserver: read-only range request "key:\"/registry/events/configmap-8221/pod-configmaps-cfb023f0-b2f6-4aec-a927-8424d162d52f.15d95e50192c480d\" " with result "range_response_count:1 size:592" took too long (147.125086ms) to execute
2019-11-22 03:27:37.479218 W | etcdserver: read-only range request "key:\"/registry/pods/provisioning-495/pod-subpath-test-emptydir-shpn\" " with result "range_response_count:1 size:1758" took too long (149.162014ms) to execute
2019-11-22 03:27:37.479427 W | etcdserver: read-only range request "key:\"/registry/pods/container-probe-146/test-webserver-b56520ca-dce2-4dea-b030-d7c1dc01dc59\" " with result "range_response_count:1 size:1313" took too long (149.509987ms) to execute
2019-11-22 03:27:37.479631 W | etcdserver: read-only range request "key:\"/registry/pods/volumemode-7887/\" range_end:\"/registry/pods/volumemode-78870\" " with result "range_response_count:1 size:1239" took too long (149.976861ms) to execute
2019-11-22 03:27:37.480062 W | etcdserver: read-only range request "key:\"/registry/events/csi-mock-volumes-2478/csi-mockplugin-0.15d95e3d665bc8df\" " with result "range_response_count:1 size:509" took too long (150.402106ms) to execute
2019-11-22 03:27:37.660713 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/downward-api-3508/\" range_end:\"/registry/resourcequotas/downward-api-35080\" " with result "range_response_count:0 size:5" took too long (131.66396ms) to execute
2019-11-22 03:27:37.661141 W | etcdserver: read-only range request "key:\"/registry/replicasets/dns-6138/\" range_end:\"/registry/replicasets/dns-61380\" " with result "range_response_count:0 size:5" took too long (129.623916ms) to execute
2019-11-22 03:27:37.661430 W | etcdserver: read-only range request "key:\"/registry/podtemplates/emptydir-6079/\" range_end:\"/registry/podtemplates/emptydir-60790\" " with result "range_response_count:0 size:5" took too long (130.923945ms) to execute
2019-11-22 03:27:37.661697 W | etcdserver: read-only range request "key:\"/registry/events/csi-mock-volumes-2478/csi-mockplugin-0.15d95e3f40e7dd05\" " with result "range_response_count:1 size:520" took too long (131.738433ms) to execute
2019-11-22 03:27:37.664913 W | etcdserver: read-only range request "key:\"/registry/events/deployment-8660/webserver-deployment-595b5b9587-9q9r4.15d95e4da5e593a6\" " with result "range_response_count:1 size:556" took too long (132.172628ms) to execute
2019-11-22 03:27:37.665552 W | etcdserver: read-only range request "key:\"/registry/events/configmap-8221/pod-configmaps-cfb023f0-b2f6-4aec-a927-8424d162d52f.15d95e50371f2762\" " with result "range_response_count:1 size:592" took too long (135.608965ms) to execute
2019-11-22 03:27:37.665719 W | etcdserver: read-only range request "key:\"/registry/leases/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:309" took too long (136.505291ms) to execute
2019-11-22 03:27:37.881545 W | etcdserver: read-only range request "key:\"/registry/events/deployment-8660/webserver-deployment-595b5b9587-9q9r4.15d95e4dfc6ad980\" " with result "range_response_count:1 size:511" took too long (129.797857ms) to execute
2019-11-22 03:27:39.032216 W | etcdserver: read-only range request "key:\"/registry/controllers/downward-api-3508/\" range_end:\"/registry/controllers/downward-api-35080\" " with result "range_response_count:0 size:5" took too long (122.872098ms) to execute
2019-11-22 03:27:39.033287 W | etcdserver: read-only range request "key:\"/registry/events/dns-6138/dns-test-5e17ebea-4ce3-413d-8559-2df2532d8718.15d95e489da98a4c\" " with result "range_response_count:1 size:591" took too long (125.541722ms) to execute
2019-11-22 03:27:39.034036 W | etcdserver: read-only range requ