Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2023-01-23 11:23
Elapsed: 1h5m
Revision: main

No Test Failures!


Error lines from build-log.txt

... skipping 603 lines ...
  Jan 23 11:35:20.486: INFO: Unexpected error listing nodes: Get "https://192.168.6.175:6443/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse&resourceVersion=0": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 11:35:50.786: INFO: Unexpected error listing nodes: Get "https://192.168.6.175:6443/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse&resourceVersion=0": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 57 lines ...
  {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

... skipping 8 lines ...
  Jan 23 11:36:23.865: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:50664->192.168.6.175:6443: read: connection reset by peer

... skipping 7 lines ...
  Jan 23 11:36:26.969: FAIL: failed to create CustomResourceDefinition: Post "https://192.168.6.175:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions": read tcp 172.18.0.3:50716->192.168.6.175:6443: read: connection reset by peer

... skipping 21 lines ...
  Jan 23 11:36:27.351: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:36:27.734: FAIL: Couldn't delete ns: "crd-publish-openapi-9661": Delete "https://192.168.6.175:6443/api/v1/namespaces/crd-publish-openapi-9661": read tcp 172.18.0.3:42782->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/crd-publish-openapi-9661", Err:(*net.OpError)(0xc003f0b400)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc003ee3d40, 0x112}, {0xc003d2ec08, 0x6ec4cca, 0xc003d2ec30})

... skipping 21 lines ...
    Jan 23 11:36:26.969: failed to create CustomResourceDefinition: Post "https://192.168.6.175:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions": read tcp 172.18.0.3:50716->192.168.6.175:6443: read: connection reset by peer

... skipping 8 lines ...
  Jan 23 11:36:23.865: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:50660->192.168.6.175:6443: read: connection reset by peer

... skipping 5 lines ...
  Jan 23 11:36:26.969: FAIL: creating namespace for webhook configuration ready markers

  Unexpected error:

      <*url.Error | 0xc003208c30>: {

... skipping 33 lines ...
  Jan 23 11:36:27.291: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:36:27.606: FAIL: Couldn't delete ns: "webhook-3388": Delete "https://192.168.6.175:6443/api/v1/namespaces/webhook-3388": read tcp 172.18.0.3:42780->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/webhook-3388", Err:(*net.OpError)(0xc002b4eaf0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002978480, 0x112}, {0xc002c68c08, 0x6ec4cca, 0xc002c68c30})

... skipping 20 lines ...
    should unconditionally reject operations on fail closed webhook [Conformance] [BeforeEach]

... skipping 3 lines ...
    Unexpected error:

        <*url.Error | 0xc003208c30>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":0,"skipped":41,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]"]}

... skipping 10 lines ...
  Jan 23 11:36:30.416: FAIL: failed to wait for definition "com.example.crd-publish-openapi-test-foo.v1.e2e-test-crd-publish-openapi-9675-crd" to be served with the right OpenAPI schema: failed to wait for OpenAPI spec validating condition: read tcp 172.18.0.3:42798->192.168.6.175:6443: read: connection reset by peer; lastMsg: 

... skipping 22 lines ...
    Jan 23 11:36:30.417: failed to wait for definition "com.example.crd-publish-openapi-test-foo.v1.e2e-test-crd-publish-openapi-9675-crd" to be served with the right OpenAPI schema: failed to wait for OpenAPI spec validating condition: read tcp 172.18.0.3:42798->192.168.6.175:6443: read: connection reset by peer; lastMsg: 

... skipping 3 lines ...
  {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":0,"skipped":54,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

... skipping 10 lines ...
  Jan 23 11:36:30.163: FAIL: creating role binding webhook-5701:webhook to access configMap

  Unexpected error:

      <*url.Error | 0xc003cf64e0>: {

... skipping 33 lines ...
  Jan 23 11:36:30.481: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 19 lines ...
    should unconditionally reject operations on fail closed webhook [Conformance] [BeforeEach]

... skipping 3 lines ...
    Unexpected error:

        <*url.Error | 0xc003cf64e0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":0,"skipped":54,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

... skipping 10 lines ...
  Jan 23 11:36:33.464: FAIL: creating role binding webhook-821:webhook to access configMap

  Unexpected error:

      <*url.Error | 0xc004950e70>: {

... skipping 42 lines ...
    should unconditionally reject operations on fail closed webhook [Conformance] [BeforeEach]

... skipping 3 lines ...
    Unexpected error:

        <*url.Error | 0xc004950e70>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":0,"skipped":54,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":0,"skipped":41,"failed":2,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]"]}

... skipping 10 lines ...
  Jan 23 11:36:33.337: FAIL: failed to wait for definition "com.example.crd-publish-openapi-test-foo.v1.e2e-test-crd-publish-openapi-949-crd" to be served with the right OpenAPI schema: failed to wait for OpenAPI spec validating condition: read tcp 172.18.0.3:42862->192.168.6.175:6443: read: connection reset by peer; lastMsg: 

... skipping 22 lines ...
    Jan 23 11:36:33.337: failed to wait for definition "com.example.crd-publish-openapi-test-foo.v1.e2e-test-crd-publish-openapi-949-crd" to be served with the right OpenAPI schema: failed to wait for OpenAPI spec validating condition: read tcp 172.18.0.3:42862->192.168.6.175:6443: read: connection reset by peer; lastMsg: 

... skipping 3 lines ...
  {"msg":"FAILED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":0,"skipped":41,"failed":3,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]"]}

... skipping 16 lines ...
  Jan 23 11:36:36.353: FAIL: Unexpected error:

... skipping 2 lines ...
              s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-247 api-versions:\nCommand stdout:\n\nstderr:\nerror: couldn't get available api versions from server: Get \"https://192.168.6.175:6443/api?timeout=32s\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")\n\nerror:\nexit status 1",

... skipping 3 lines ...
      error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-247 api-versions:

... skipping 3 lines ...
      error: couldn't get available api versions from server: Get "https://192.168.6.175:6443/api?timeout=32s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

      
      error:

... skipping 32 lines ...
      Jan 23 11:36:36.353: Unexpected error:

... skipping 2 lines ...
                  s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-247 api-versions:\nCommand stdout:\n\nstderr:\nerror: couldn't get available api versions from server: Get \"https://192.168.6.175:6443/api?timeout=32s\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")\n\nerror:\nexit status 1",

... skipping 3 lines ...
          error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-247 api-versions:

... skipping 3 lines ...
          error: couldn't get available api versions from server: Get "https://192.168.6.175:6443/api?timeout=32s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

          
          error:

... skipping 5 lines ...
  {"msg":"FAILED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":0,"skipped":60,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]"]}

... skipping 18 lines ...
  Jan 23 11:36:42.195: FAIL: Couldn't delete ns: "kubectl-4683": Delete "https://192.168.6.175:6443/api/v1/namespaces/kubectl-4683": read tcp 172.18.0.3:51182->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/kubectl-4683", Err:(*net.OpError)(0xc002f13a90)})

... skipping 22 lines ...
      Jan 23 11:36:42.195: Couldn't delete ns: "kubectl-4683": Delete "https://192.168.6.175:6443/api/v1/namespaces/kubectl-4683": read tcp 172.18.0.3:51182->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/kubectl-4683", Err:(*net.OpError)(0xc002f13a90)})

... skipping 29 lines ...
  {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":34,"failed":0}

... skipping 12 lines ...
  Jan 23 11:36:45.156: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc00303ef30>: {

... skipping 39 lines ...
  Jan 23 11:36:45.537: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:36:45.919: FAIL: Couldn't delete ns: "containers-7607": Delete "https://192.168.6.175:6443/api/v1/namespaces/containers-7607": read tcp 172.18.0.3:51268->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/containers-7607", Err:(*net.OpError)(0xc0018525a0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000628b40, 0x112}, {0xc004afac08, 0x6ec4cca, 0xc004afac30})

... skipping 21 lines ...
    Jan 23 11:36:45.156: Error creating Pod

    Unexpected error:

        <*url.Error | 0xc00303ef30>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":0,"skipped":60,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:36:42.520: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 16 lines ...
  {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":1,"skipped":60,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]"]}

... skipping 11 lines ...
  Jan 23 11:36:51.247: FAIL: creating namespace for webhook configuration ready markers

  Unexpected error:

      <*url.Error | 0xc004e8b350>: {

... skipping 33 lines ...
  Jan 23 11:36:51.571: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:36:51.955: FAIL: Couldn't delete ns: "webhook-9134": Delete "https://192.168.6.175:6443/api/v1/namespaces/webhook-9134": read tcp 172.18.0.3:53598->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/webhook-9134", Err:(*net.OpError)(0xc000bc11d0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00495ac60, 0x112}, {0xc002c68c08, 0x6ec4cca, 0xc002c68c30})

... skipping 24 lines ...
    Unexpected error:

        <*url.Error | 0xc004e8b350>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":1,"skipped":108,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

... skipping 10 lines ...
  Jan 23 11:36:54.370: FAIL: creating role binding webhook-2388:webhook to access configMap

  Unexpected error:

      <*url.Error | 0xc0050e51a0>: {

... skipping 33 lines ...
  Jan 23 11:36:54.748: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 23 lines ...
    Unexpected error:

        <*url.Error | 0xc0050e51a0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":1,"skipped":108,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

... skipping 10 lines ...
  Jan 23 11:36:57.791: FAIL: creating role binding webhook-9563:webhook to access configMap

  Unexpected error:

      <*url.Error | 0xc004f0ea80>: {

... skipping 118 lines ...
                  s: "crypto/rsa: verification error",

... skipping 99 lines ...
      Post "https://192.168.6.175:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 31 lines ...
    Unexpected error:

        <*url.Error | 0xc004f0ea80>: {

... skipping 118 lines ...
                    s: "crypto/rsa: verification error",

... skipping 99 lines ...
        Post "https://192.168.6.175:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 4 lines ...
  {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":1,"skipped":108,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

... skipping 13 lines ...
  Jan 23 11:37:00.452: FAIL: error deleting Service

  Unexpected error:

      <*url.Error | 0xc002f10000>: {

... skipping 33 lines ...
  Jan 23 11:37:00.710: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:37:01.074: FAIL: Couldn't delete ns: "endpointslice-9597": Delete "https://192.168.6.175:6443/api/v1/namespaces/endpointslice-9597": read tcp 172.18.0.3:51804->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/endpointslice-9597", Err:(*net.OpError)(0xc003c62f50)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc004f5a900, 0x112}, {0xc00433ac08, 0x6ec4cca, 0xc00433ac30})

... skipping 21 lines ...
    Jan 23 11:37:00.452: error deleting Service

    Unexpected error:

        <*url.Error | 0xc002f10000>: {

... skipping 31 lines ...
  Jan 23 11:36:26.777: INFO: Waiting up to 5m0s for pod "pod-secrets-56d6dac4-f137-468b-b1e6-345e5648b956" in namespace "secrets-6780" to be "Succeeded or Failed"

... skipping 14 lines ...
  Jan 23 11:37:01.009: INFO: Pod "pod-secrets-56d6dac4-f137-468b-b1e6-345e5648b956" satisfied condition "Succeeded or Failed"

... skipping 12 lines ...
  {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":1,"skipped":116,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]"]}

... skipping 17 lines ...
  {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":2,"skipped":116,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:37:06.375: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc003f4e390>: {

... skipping 33 lines ...
  Jan 23 11:37:06.764: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:37:07.102: FAIL: Couldn't delete ns: "emptydir-9444": Delete "https://192.168.6.175:6443/api/v1/namespaces/emptydir-9444": read tcp 172.18.0.3:58304->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/emptydir-9444", Err:(*net.OpError)(0xc00332a960)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00159a480, 0x112}, {0xc00433ac08, 0x6ec4cca, 0xc00433ac30})

... skipping 21 lines ...
    Jan 23 11:37:06.375: Error creating Pod

    Unexpected error:

        <*url.Error | 0xc003f4e390>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":2,"skipped":144,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]"]}

... skipping 22 lines ...
  {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":3,"skipped":144,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]"]}

... skipping 16 lines ...
  Jan 23 11:36:38.536: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jrrq" in namespace "subpath-1428" to be "Succeeded or Failed"

... skipping 18 lines ...
  Jan 23 11:37:20.957: INFO: Pod "pod-subpath-test-projected-jrrq" satisfied condition "Succeeded or Failed"

... skipping 13 lines ...
  {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":1,"skipped":123,"failed":3,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":51,"failed":1,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]"]}

... skipping 9 lines ...
  Jan 23 11:36:47.323: INFO: Waiting up to 5m0s for pod "client-containers-1f917d84-28f1-4a0d-ae97-f2477460cab4" in namespace "containers-411" to be "Succeeded or Failed"

... skipping 16 lines ...
  Jan 23 11:37:24.341: INFO: Pod "client-containers-1f917d84-28f1-4a0d-ae97-f2477460cab4" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":51,"failed":1,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]"]}

... skipping 14 lines ...
  Jan 23 11:37:27.553: FAIL: Unexpected error:

      <*errors.errorString | 0xc004053080>: {
          s: "failed to create TCP Service \"nodeport-test\": Post \"https://192.168.6.175:6443/api/v1/namespaces/services-6751/services\": read tcp 172.18.0.3:36546->192.168.6.175:6443: read: connection reset by peer",

      }
      failed to create TCP Service "nodeport-test": Post "https://192.168.6.175:6443/api/v1/namespaces/services-6751/services": read tcp 172.18.0.3:36546->192.168.6.175:6443: read: connection reset by peer

... skipping 16 lines ...
  Jan 23 11:37:27.942: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:37:28.258: FAIL: Couldn't delete ns: "services-6751": Delete "https://192.168.6.175:6443/api/v1/namespaces/services-6751": read tcp 172.18.0.3:51622->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/services-6751", Err:(*net.OpError)(0xc003eecc80)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001c865a0, 0x112}, {0xc004afac08, 0x6ec4cca, 0xc004afac30})

... skipping 23 lines ...
    Jan 23 11:37:27.553: Unexpected error:

        <*errors.errorString | 0xc004053080>: {
            s: "failed to create TCP Service \"nodeport-test\": Post \"https://192.168.6.175:6443/api/v1/namespaces/services-6751/services\": read tcp 172.18.0.3:36546->192.168.6.175:6443: read: connection reset by peer",

        }
        failed to create TCP Service "nodeport-test": Post "https://192.168.6.175:6443/api/v1/namespaces/services-6751/services": read tcp 172.18.0.3:36546->192.168.6.175:6443: read: connection reset by peer

... skipping 14 lines ...
  Jan 23 11:37:05.034: INFO: Waiting up to 5m0s for pod "pod-secrets-0ea6f3be-0f1a-4f35-b047-8692e343a4c6" in namespace "secrets-8794" to be "Succeeded or Failed"

... skipping 16 lines ...
  Jan 23 11:37:42.564: INFO: Pod "pod-secrets-0ea6f3be-0f1a-4f35-b047-8692e343a4c6" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":103,"failed":0}

... skipping 48 lines ...
  Jan 23 11:37:45.373: FAIL: failed to execute command in pod test-pod, container busybox-1: Timeout occurred

  Unexpected error:

... skipping 28 lines ...
  E0123 11:37:45.943653      18 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:58436->192.168.6.175:6443: read: connection reset by peer

  Jan 23 11:37:45.943: FAIL: All nodes should be ready after test, unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:58436->192.168.6.175:6443: read: connection reset by peer

... skipping 11 lines ...
  Jan 23 11:37:46.268: FAIL: Couldn't delete ns: "e2e-kubelet-etc-hosts-6995": Delete "https://192.168.6.175:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-6995": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-6995", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc0030acb00), hintErr:(*errors.errorString)(0xc00007c4b0), hintCert:(*x509.Certificate)(0xc000423b80)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002c801c0, 0xd3}, {0xc002124c08, 0x6ec4cca, 0xc002124c30})

... skipping 21 lines ...
    Jan 23 11:37:45.373: failed to execute command in pod test-pod, container busybox-1: Timeout occurred

    Unexpected error:

... skipping 13 lines ...
  Jan 23 11:37:21.663: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:36442->192.168.6.175:6443: read: connection reset by peer

... skipping 24 lines ...
  {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":164,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":0,"skipped":7,"failed":1,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:37:46.619: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:48654->192.168.6.175:6443: read: connection reset by peer

  Jan 23 11:37:48.889: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:55800->192.168.6.175:6443: read: connection reset by peer

... skipping 5 lines ...
  Jan 23 11:37:51.907: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc003ad6000>: {

... skipping 39 lines ...
  Jan 23 11:37:52.269: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:37:52.632: FAIL: Couldn't delete ns: "e2e-kubelet-etc-hosts-7814": Delete "https://192.168.6.175:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-7814": read tcp 172.18.0.3:55866->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-7814", Err:(*net.OpError)(0xc0037a42d0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0030f8360, 0x112}, {0xc002124c08, 0x6ec4cca, 0xc002124c30})

... skipping 21 lines ...
    Jan 23 11:37:51.907: Error creating Pod

    Unexpected error:

        <*url.Error | 0xc003ad6000>: {

... skipping 30 lines ...
  Jan 23 11:37:47.371: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1206dec3-682e-4b0e-b53f-caa388dc0bdf" in namespace "projected-864" to be "Succeeded or Failed"

... skipping 8 lines ...
  Jan 23 11:38:02.054: INFO: Pod "downwardapi-volume-1206dec3-682e-4b0e-b53f-caa388dc0bdf" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":110,"failed":0}

... skipping 37 lines ...
  {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":190,"failed":3,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":0,"skipped":7,"failed":2,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 55 lines ...
  W0123 11:39:01.814014      18 http.go:498] Error reading backend response: read tcp 172.18.0.3:36676->192.168.6.175:6443: read: connection reset by peer

  Jan 23 11:39:01.814: INFO: Exec stderr: ""
  Jan 23 11:39:01.814: FAIL: failed to execute command in pod test-pod, container busybox-1: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-5568/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true": read tcp 172.18.0.3:36676->192.168.6.175:6443: read: connection reset by peer

  Unexpected error:

      <*errors.errorString | 0xc0034f25a0>: {
          s: "error sending request: Post \"https://192.168.6.175:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-5568/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true\": read tcp 172.18.0.3:36676->192.168.6.175:6443: read: connection reset by peer",

      }
      error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-5568/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true": read tcp 172.18.0.3:36676->192.168.6.175:6443: read: connection reset by peer

... skipping 33 lines ...
    Jan 23 11:39:01.814: failed to execute command in pod test-pod, container busybox-1: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-5568/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true": read tcp 172.18.0.3:36676->192.168.6.175:6443: read: connection reset by peer

    Unexpected error:

        <*errors.errorString | 0xc0034f25a0>: {
            s: "error sending request: Post \"https://192.168.6.175:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-5568/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true\": read tcp 172.18.0.3:36676->192.168.6.175:6443: read: connection reset by peer",

        }
        error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-5568/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true": read tcp 172.18.0.3:36676->192.168.6.175:6443: read: connection reset by peer

... skipping 4 lines ...
  {"msg":"FAILED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":0,"skipped":7,"failed":3,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 29 lines ...
  {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":257,"failed":3,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]"]}

... skipping 14 lines ...
  Jan 23 11:39:19.929: FAIL: Unexpected error:

      <*url.Error | 0xc002044db0>: {

... skipping 33 lines ...
  Jan 23 11:39:20.323: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:39:20.715: FAIL: Couldn't delete ns: "replication-controller-7319": Delete "https://192.168.6.175:6443/api/v1/namespaces/replication-controller-7319": read tcp 172.18.0.3:56874->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/replication-controller-7319", Err:(*net.OpError)(0xc00397c370)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001336c60, 0x112}, {0xc0012d0c08, 0x6ec4cca, 0xc0012d0c30})

... skipping 21 lines ...
    Jan 23 11:39:19.929: Unexpected error:

        <*url.Error | 0xc002044db0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":3,"skipped":278,"failed":4,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]"]}

... skipping 18 lines ...
  Jan 23 11:39:22.917: FAIL: Couldn't delete ns: "replication-controller-5211": Delete "https://192.168.6.175:6443/api/v1/namespaces/replication-controller-5211": read tcp 172.18.0.3:56882->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/replication-controller-5211", Err:(*net.OpError)(0xc003136690)})

... skipping 20 lines ...
    Jan 23 11:39:22.917: Couldn't delete ns: "replication-controller-5211": Delete "https://192.168.6.175:6443/api/v1/namespaces/replication-controller-5211": read tcp 172.18.0.3:56882->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/replication-controller-5211", Err:(*net.OpError)(0xc003136690)})

... skipping 3 lines ...
  {"msg":"FAILED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":3,"skipped":278,"failed":5,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:39:23.220: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 16 lines ...
  {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":4,"skipped":278,"failed":5,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]"]}

... skipping 46 lines ...
  {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":111,"failed":0}

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":3,"skipped":65,"failed":2,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

... skipping 17 lines ...
  E0123 11:37:43.131194      17 reflector.go:138] k8s.io/kubernetes/test/utils/pod_store.go:57: Failed to watch *v1.Pod: Get "https://192.168.6.175:6443/api/v1/namespaces/services-515/pods?allowWatchBookmarks=true&labelSelector=name%3Dnodeport-test&resourceVersion=3262&timeout=5m52s&timeoutSeconds=352&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  E0123 11:38:16.579727      17 reflector.go:138] k8s.io/kubernetes/test/utils/pod_store.go:57: Failed to watch *v1.Pod: Get "https://192.168.6.175:6443/api/v1/namespaces/services-515/pods?allowWatchBookmarks=true&labelSelector=name%3Dnodeport-test&resourceVersion=3520&timeout=5m39s&timeoutSeconds=339&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  I0123 11:38:17.895925      17 runners.go:193] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
  W0123 11:38:19.577544      17 reflector.go:324] k8s.io/kubernetes/test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://192.168.6.175:6443/api/v1/namespaces/services-515/pods?labelSelector=name%3Dnodeport-test&resourceVersion=3520": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  E0123 11:38:19.577632      17 reflector.go:138] k8s.io/kubernetes/test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.6.175:6443/api/v1/namespaces/services-515/pods?labelSelector=name%3Dnodeport-test&resourceVersion=3520": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 12 lines ...
  Jan 23 11:39:44.703: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-515 exec execpod6l68x -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.111.57.43 80:

... skipping 3 lines ...
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 7 lines ...
  Jan 23 11:39:47.418: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-515 exec execpod6l68x -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.6.75 31656:

... skipping 3 lines ...
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 7 lines ...
  Jan 23 11:39:50.849: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-515 exec execpod6l68x -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.6.75 31656:

... skipping 3 lines ...
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 22 lines ...
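The reachability probes above shell out to `nc -v -t -w 2 <addr> <port>` inside the execpod. The same connect-with-timeout check against the NodePort pair from the log (192.168.6.75:31656) can be approximated with a few lines of Go; this is illustrative only, and the NodePort is used here because the ClusterIP 10.111.57.43 is typically only routable from inside the cluster:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint and 2-second timeout as the nc probe in the log.
	addr := "192.168.6.75:31656"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err)
		return
	}
	defer conn.Close()
	// The test also writes "hostName" to the connection; do the same here.
	if _, err := fmt.Fprintln(conn, "hostName"); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("connected to", addr)
}
```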
  {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":4,"skipped":65,"failed":2,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

... skipping 42 lines ...
  Jan 23 11:39:59.578: FAIL: Failed to delete pod "kube-proxy-mode-detector": Delete "https://192.168.6.175:6443/api/v1/namespaces/services-179/pods/kube-proxy-mode-detector": read tcp 172.18.0.3:49224->192.168.6.175:6443: read: connection reset by peer

... skipping 19 lines ...
  Jan 23 11:39:59.907: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 21 lines ...
    Jan 23 11:39:59.578: Failed to delete pod "kube-proxy-mode-detector": Delete "https://192.168.6.175:6443/api/v1/namespaces/services-179/pods/kube-proxy-mode-detector": read tcp 172.18.0.3:49224->192.168.6.175:6443: read: connection reset by peer

... skipping 25 lines ...
  {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":79,"failed":2,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

... skipping 36 lines ...
  Jan 23 11:40:20.685: FAIL: failed to create CustomResourceDefinition: Post "https://192.168.6.175:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions": read tcp 172.18.0.3:43492->192.168.6.175:6443: read: connection reset by peer

... skipping 17 lines ...
  Jan 23 11:40:21.081: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 12 lines ...
  Jan 23 11:40:21.907: FAIL: Couldn't delete ns: "webhook-2157": Delete "https://192.168.6.175:6443/api/v1/namespaces/webhook-2157": read tcp 172.18.0.3:53874->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/webhook-2157", Err:(*net.OpError)(0xc001be6dc0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002a7c360, 0x112}, {0xc002ed2c08, 0x6ec4cca, 0xc002ed2c30})

... skipping 23 lines ...
    Jan 23 11:40:20.685: failed to create CustomResourceDefinition: Post "https://192.168.6.175:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions": read tcp 172.18.0.3:43492->192.168.6.175:6443: read: connection reset by peer

... skipping 3 lines ...
  {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":4,"skipped":148,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

... skipping 8 lines ...
  Jan 23 11:40:23.677: FAIL: error labeling namespace webhook-7107

  Unexpected error:

      <*url.Error | 0xc0043c24e0>: {

... skipping 33 lines ...
  Jan 23 11:40:24.005: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:40:24.373: FAIL: Couldn't delete ns: "webhook-7107": Delete "https://192.168.6.175:6443/api/v1/namespaces/webhook-7107": read tcp 172.18.0.3:53914->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/webhook-7107", Err:(*net.OpError)(0xc00363f6d0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002ac3e60, 0x112}, {0xc002ed2c08, 0x6ec4cca, 0xc002ed2c30})

... skipping 23 lines ...
    Jan 23 11:40:23.677: error labeling namespace webhook-7107

    Unexpected error:

        <*url.Error | 0xc0043c24e0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":4,"skipped":148,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

... skipping 10 lines ...
  Jan 23 11:40:27.262: FAIL: creating role binding webhook-6955:webhook to access configMap

  Unexpected error:

      <*url.Error | 0xc00462e360>: {

... skipping 118 lines ...
                  s: "crypto/rsa: verification error",

... skipping 99 lines ...
      Post "https://192.168.6.175:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 31 lines ...
    Unexpected error:

        <*url.Error | 0xc00462e360>: {

... skipping 118 lines ...
                    s: "crypto/rsa: verification error",

... skipping 99 lines ...
        Post "https://192.168.6.175:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 4 lines ...
  {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":4,"skipped":148,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:40:05.299: INFO: Waiting up to 5m0s for pod "pod-1b786138-379e-47b6-a20c-5b00690666d7" in namespace "emptydir-7144" to be "Succeeded or Failed"

... skipping 15 lines ...
  Jan 23 11:40:41.351: INFO: Pod "pod-1b786138-379e-47b6-a20c-5b00690666d7" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":121,"failed":2,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:40:44.997: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc002255ad0>: {

... skipping 41 lines ...
  Jan 23 11:40:45.329: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:40:45.760: FAIL: Couldn't delete ns: "emptydir-4904": Delete "https://192.168.6.175:6443/api/v1/namespaces/emptydir-4904": read tcp 172.18.0.3:52962->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/emptydir-4904", Err:(*net.OpError)(0xc003a10050)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000628d80, 0x112}, {0xc004afac08, 0x6ec4cca, 0xc004afac30})

... skipping 21 lines ...
    Jan 23 11:40:44.997: Error creating Pod

    Unexpected error:

        <*url.Error | 0xc002255ad0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":0,"skipped":43,"failed":4,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

... skipping 38 lines ...
  Jan 23 11:40:44.932: FAIL: failed to create replication controller with service in the namespace: services-7294

  Unexpected error:

      <*url.Error | 0xc001b746f0>: {

... skipping 33 lines ...
  Jan 23 11:40:45.327: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:40:45.760: FAIL: Couldn't delete ns: "services-7294": Delete "https://192.168.6.175:6443/api/v1/namespaces/services-7294": read tcp 172.18.0.3:52960->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/services-7294", Err:(*net.OpError)(0xc0037a4ff0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000983440, 0x112}, {0xc002124c08, 0x6ec4cca, 0xc002124c30})

... skipping 23 lines ...
    Jan 23 11:40:44.932: failed to create replication controller with service in the namespace: services-7294

    Unexpected error:

        <*url.Error | 0xc001b746f0>: {

... skipping 28 lines ...
  Jan 23 11:40:31.565: INFO: Waiting up to 5m0s for pod "var-expansion-45be5467-4d69-4ec1-b4fc-604f3c54207e" in namespace "var-expansion-2040" to be "Succeeded or Failed"

... skipping 15 lines ...
  Jan 23 11:41:04.170: INFO: Pod "var-expansion-45be5467-4d69-4ec1-b4fc-604f3c54207e" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":151,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

... skipping 101 lines ...
  {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":6,"skipped":187,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":165,"failed":3,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 9 lines ...
  Jan 23 11:40:46.939: INFO: Waiting up to 5m0s for pod "pod-f1c15943-23c5-4620-a450-2a20be6e0f4f" in namespace "emptydir-8862" to be "Succeeded or Failed"

... skipping 19 lines ...
  Jan 23 11:41:31.541: INFO: Pod "pod-f1c15943-23c5-4620-a450-2a20be6e0f4f" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":165,"failed":3,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 8 lines ...
  Jan 23 11:41:15.295: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:51916->192.168.6.175:6443: read: connection reset by peer

... skipping 20 lines ...
  Jan 23 11:41:51.682: FAIL: failed to patch Deployment

  Unexpected error:

      <*url.Error | 0xc000601d10>: {

... skipping 30 lines ...
  Jan 23 11:41:52.011: INFO: Could not list Deployments in namespace "deployment-365": Get "https://192.168.6.175:6443/apis/apps/v1/namespaces/deployment-365/deployments": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 12 lines ...
    Jan 23 11:41:51.682: failed to patch Deployment

    Unexpected error:

        <*url.Error | 0xc000601d10>: {

... skipping 24 lines ...
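The "failed to patch Deployment" step above is an apps/v1 PATCH in deployment-365 that never reaches the server intact. An equivalent standalone call is sketched below; the deployment name and the strategic-merge patch body are placeholders, only the namespace comes from the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Illustrative strategic-merge patch; the real test patches other fields.
	patch := []byte(`{"metadata":{"labels":{"patched":"true"}}}`)

	d, err := cs.AppsV1().Deployments("deployment-365").Patch(context.TODO(),
		"test-deployment", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		fmt.Println("patch failed:", err)
		return
	}
	fmt.Println("patched generation:", d.Generation)
}
```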
  Jan 23 11:39:29.064: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:50656->192.168.6.175:6443: read: connection reset by peer

... skipping 27 lines ...
  {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":303,"failed":5,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]"]}

... skipping 13 lines ...
  Jan 23 11:42:06.346: INFO: Waiting up to 5m0s for pod "pod-secrets-1734e6a4-5557-4b01-926e-b715e3527429" in namespace "secrets-9397" to be "Succeeded or Failed"

... skipping 8 lines ...
  Jan 23 11:42:23.797: INFO: Pod "pod-secrets-1734e6a4-5557-4b01-926e-b715e3527429" satisfied condition "Succeeded or Failed"

... skipping 8 lines ...
  Jan 23 11:42:25.092: FAIL: Couldn't delete ns: "secrets-9397": Delete "https://192.168.6.175:6443/api/v1/namespaces/secrets-9397": read tcp 172.18.0.3:48174->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/secrets-9397", Err:(*net.OpError)(0xc0041d9c70)})

... skipping 20 lines ...
    Jan 23 11:42:25.092: Couldn't delete ns: "secrets-9397": Delete "https://192.168.6.175:6443/api/v1/namespaces/secrets-9397": read tcp 172.18.0.3:48174->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/secrets-9397", Err:(*net.OpError)(0xc0041d9c70)})

... skipping 25 lines ...
  {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":8,"skipped":192,"failed":3,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 43 lines ...
  Jan 23 11:39:44.573: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4553 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true:

... skipping 3 lines ...
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 12 lines ...
  Jan 23 11:40:11.657: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4553 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  error: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/statefulset-4553/pods/ss2-1/exec?command=%2Fbin%2Fsh&command=-x&command=-c&command=mv+-v+%2Ftmp%2Findex.html+%2Fusr%2Flocal%2Fapache2%2Fhtdocs%2F+%7C%7C+true&container=webserver&stderr=true&stdout=true": read tcp 172.18.0.3:43408->192.168.6.175:6443: read: connection reset by peer

  
  error:

... skipping 19 lines ...
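The failing `kubectl ... exec ss2-1 -- /bin/sh -x -c mv ...` calls above are POSTs to the pod's exec subresource (the URL-encoded form is visible in the error line). Made programmatically, the same call goes through the remotecommand SPDY executor, roughly as in this sketch; pod, container, namespace, and command are copied from the log, error handling is trimmed:

```go
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Build the same exec subresource request shown in the failing URL.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("statefulset-4553").Name("ss2-1").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "webserver",
			Command:   []string{"/bin/sh", "-x", "-c", "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var out, errOut bytes.Buffer
	// Stream runs the command; a connection reset here reproduces the failure above.
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &out, Stderr: &errOut})
	fmt.Printf("stdout=%q stderr=%q err=%v\n", out.String(), errOut.String(), err)
}
```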
  Jan 23 11:41:15.885: FAIL: Failed waiting for state update: Get "https://192.168.6.175:6443/apis/apps/v1/namespaces/statefulset-4553/statefulsets/ss2": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 302 lines ...
      Jan 23 11:41:15.885: Failed waiting for state update: Get "https://192.168.6.175:6443/apis/apps/v1/namespaces/statefulset-4553/statefulsets/ss2": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 3 lines ...
  {"msg":"FAILED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":320,"failed":6,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:42:25.488: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 5 lines ...
  Jan 23 11:42:30.461: INFO: Waiting up to 5m0s for pod "pod-secrets-8102b1ff-ecf0-48db-aba5-b3ef818e9224" in namespace "secrets-1503" to be "Succeeded or Failed"

... skipping 22 lines ...
  Jan 23 11:43:26.983: INFO: Pod "pod-secrets-8102b1ff-ecf0-48db-aba5-b3ef818e9224" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":320,"failed":6,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":6,"skipped":193,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]"]}

... skipping 23 lines ...
  E0123 11:41:58.236170      35 retrywatcher.go:130] "Watch failed" err="Get \"https://192.168.6.175:6443/apis/apps/v1/namespaces/deployment-4692/deployments?allowWatchBookmarks=true&labelSelector=test-deployment-static%3Dtrue&resourceVersion=5397&watch=true\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")"

... skipping 32 lines ...
  Jan 23 11:43:25.733: FAIL: failed to update the DeploymentStatus

  Unexpected error:

      <*url.Error | 0xc0035ac030>: {

... skipping 30 lines ...
  Jan 23 11:43:26.116: INFO: Could not list Deployments in namespace "deployment-4692": Get "https://192.168.6.175:6443/apis/apps/v1/namespaces/deployment-4692/deployments": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 12 lines ...
    Jan 23 11:43:25.733: failed to update the DeploymentStatus

    Unexpected error:

        <*url.Error | 0xc0035ac030>: {

... skipping 39 lines ...
  Jan 23 11:42:46.320: FAIL: Failed to update status. Put "https://192.168.6.175:6443/apis/apps/v1/namespaces/statefulset-1658/statefulsets/ss/status": read tcp 172.18.0.3:44996->192.168.6.175:6443: read: connection reset by peer

  Unexpected error:

      <*url.Error | 0xc0022549f0>: {

... skipping 38 lines ...
  Jan 23 11:43:31.840: FAIL: Couldn't delete ns: "statefulset-1658": Delete "https://192.168.6.175:6443/api/v1/namespaces/statefulset-1658": read tcp 172.18.0.3:59208->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/statefulset-1658", Err:(*net.OpError)(0xc0045932c0)})

... skipping 22 lines ...
      Jan 23 11:42:46.320: Failed to update status. Put "https://192.168.6.175:6443/apis/apps/v1/namespaces/statefulset-1658/statefulsets/ss/status": read tcp 172.18.0.3:44996->192.168.6.175:6443: read: connection reset by peer

      Unexpected error:

          <*url.Error | 0xc0022549f0>: {

... skipping 25 lines ...
  [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]

... skipping 10 lines ...
  {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":7,"skipped":346,"failed":6,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 30 lines ...
  Jan 23 11:45:05.899: FAIL: Unexpected error:

      <*url.Error | 0xc00393e840>: {

... skipping 31 lines ...
  Jan 23 11:45:06.284: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:45:06.598: FAIL: Couldn't delete ns: "endpointslice-6257": Delete "https://192.168.6.175:6443/api/v1/namespaces/endpointslice-6257": read tcp 172.18.0.3:57032->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/endpointslice-6257", Err:(*net.OpError)(0xc003f0a550)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00121ec60, 0x112}, {0xc0012d0c08, 0x6ec4cca, 0xc0012d0c30})

... skipping 21 lines ...
    Jan 23 11:45:05.899: Unexpected error:

        <*url.Error | 0xc00393e840>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":7,"skipped":368,"failed":7,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]"]}

... skipping 14 lines ...
  Jan 23 11:45:08.896: FAIL: Unexpected error:

      <*url.Error | 0xc0026df950>: {

... skipping 31 lines ...
  Jan 23 11:45:09.232: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:45:09.616: FAIL: Couldn't delete ns: "endpointslice-2080": Delete "https://192.168.6.175:6443/api/v1/namespaces/endpointslice-2080": read tcp 172.18.0.3:42362->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/endpointslice-2080", Err:(*net.OpError)(0xc003d5b590)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000524ea0, 0x112}, {0xc0012d0c08, 0x6ec4cca, 0xc0012d0c30})

... skipping 21 lines ...
    Jan 23 11:45:08.896: Unexpected error:

        <*url.Error | 0xc0026df950>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":0,"skipped":43,"failed":5,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

... skipping 48 lines ...
  E0123 11:41:49.022045      18 reflector.go:138] k8s.io/kubernetes/test/utils/pod_store.go:57: Failed to watch *v1.Pod: Get "https://192.168.6.175:6443/api/v1/namespaces/services-7592/pods?allowWatchBookmarks=true&labelSelector=name%3Daffinity-clusterip-timeout&resourceVersion=5277&timeout=7m12s&timeoutSeconds=432&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 19 lines ...
  Jan 23 11:43:35.318: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7592 exec execpod-affinity26dm2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:

... skipping 3 lines ...
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 7 lines ...
  Jan 23 11:43:38.176: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7592 exec execpod-affinity26dm2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.99.98.252 80:

... skipping 3 lines ...
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 7 lines ...
  Jan 23 11:43:41.243: INFO: Failed to get response from 10.99.98.252:80. Retry until timeout

... skipping 2 lines ...
  Jan 23 11:44:11.644: INFO: Failed to get response from 10.99.98.252:80. Retry until timeout

... skipping 26 lines ...
  Jan 23 11:45:09.173: FAIL: failed to delete pod: execpod-affinity26dm2 in namespace: services-7592

  Unexpected error:

      <*url.Error | 0xc0042dda10>: {

... skipping 118 lines ...
                  s: "crypto/rsa: verification error",

... skipping 99 lines ...
      Delete "https://192.168.6.175:6443/api/v1/namespaces/services-7592/pods/execpod-affinity26dm2": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 32 lines ...
    Jan 23 11:45:09.173: failed to delete pod: execpod-affinity26dm2 in namespace: services-7592

    Unexpected error:

        <*url.Error | 0xc0042dda10>: {

... skipping 118 lines ...
                    s: "crypto/rsa: verification error",

... skipping 99 lines ...
        Delete "https://192.168.6.175:6443/api/v1/namespaces/services-7592/pods/execpod-affinity26dm2": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 4 lines ...
  {"msg":"FAILED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":0,"skipped":43,"failed":6,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":7,"skipped":368,"failed":8,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]"]}

... skipping 34 lines ...
  {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":8,"skipped":368,"failed":8,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":8,"skipped":196,"failed":4,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:43:32.106: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 51 lines ...
  {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":9,"skipped":196,"failed":4,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:45:23.270: INFO: Waiting up to 5m0s for pod "downward-api-f9e43d4a-9bf2-4bbc-a9de-40bbdc4fbb79" in namespace "downward-api-788" to be "Succeeded or Failed"

... skipping 9 lines ...
  Jan 23 11:45:44.393: INFO: Pod "downward-api-f9e43d4a-9bf2-4bbc-a9de-40bbdc4fbb79" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":461,"failed":8,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":4,"skipped":176,"failed":11,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]"]}

... skipping 37 lines ...
  Jan 23 11:44:23.440: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-822 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true:

... skipping 3 lines ...
  W0123 11:44:23.437167     397 http.go:498] Error reading backend response: read tcp 172.18.0.3:42934->192.168.6.175:6443: read: connection reset by peer

  error: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/statefulset-822/pods/ss2-1/exec?command=%2Fbin%2Fsh&command=-x&command=-c&command=mv+-v+%2Fusr%2Flocal%2Fapache2%2Fhtdocs%2Findex.html+%2Ftmp%2F+%7C%7C+true&container=webserver&stderr=true&stdout=true": read tcp 172.18.0.3:42934->192.168.6.175:6443: read: connection reset by peer

  
  error:

... skipping 6 lines ...
  E0123 11:44:35.625265      21 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:34230->192.168.6.175:6443: read: connection reset by peer

  Jan 23 11:44:35.625: FAIL: Unexpected error:

      <*fmt.wrapError | 0xc003da3720>: {
          msg: "unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:34230->192.168.6.175:6443: read: connection reset by peer",

... skipping 12 lines ...
      unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:34230->192.168.6.175:6443: read: connection reset by peer

... skipping 33 lines ...
  E0123 11:44:35.626558      21 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 23 11:44:35.625: Unexpected error:\n    <*fmt.wrapError | 0xc003da3720>: {\n        msg: \"unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:34230->192.168.6.175:6443: read: connection reset by peer\",\n        err: {\n            Op: \"read\",\n            Net: \"tcp\",\n            Source: {IP: [172, 18, 0, 3], Port: 34230, Zone: \"\"},\n            Addr: {\n                IP: [192, 168, 6, 175],\n                Port: 6443,\n                Zone: \"\",\n            },\n            Err: {Syscall: \"read\", Err: 0x68},\n        },\n    }\n    unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:34230->192.168.6.175:6443: read: connection reset by peer\noccurred", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go", Line:68, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x78eb710, 0xc003d5c900}, 0xc000b5c500)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForState.func1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:70 +0xdf\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7f6c402454f8, 0x0})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x77ba0a8, 0xc000056080}, 0xc00433ae58)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x77ba0a8, 0xc000056080}, 0x98, 0x2bb9f85, 0x28)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x77ba0a8, 0xc000056080}, 0xc00433af00, 0xc00433aee8, 0x2378d47)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc0050e4ab0, 0x0, 0x37ffb29)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForState({0x78eb710, 0xc003d5c900}, 0xc000aec000, 0xc0050e4ab0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:64 +0xc5\nk8s.io/kubernetes/test/e2e/apps.waitForPodNotReady({0x78eb710, 0xc003d5c900}, 0xc000aec000, {0xc00349fa7a, 0x5})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/wait.go:101 +0x109\nk8s.io/kubernetes/test/e2e/apps.rollbackTest({0x78eb710, 0xc003d5c900}, {0xc0047a9b10, 0xf}, 0xc00056c000)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:1581 
+0x6fb\nk8s.io/kubernetes/test/e2e/apps.glob..func9.2.7()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:305 +0xe6\nk8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697\nk8s.io/kubernetes/test/e2e.TestE2E(0x2371919)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19\ntesting.tRunner(0xc0006824e0, 0x71566f0)\n\t/usr/local/go/src/testing/testing.go:1259 +0x102\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1306 +0x35a"} (

  Your test failed.

... skipping 17 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail({0xc00415af00, 0x2de}, {0xc00433a9b8, 0x0, 0x40})

  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xdd
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00415af00, 0x2de}, {0xc00433aa98, 0x6ec4cca, 0xc00433aab8})

  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
  k8s.io/kubernetes/test/e2e/framework.Fail({0xc00415ac00, 0x2c9}, {0xc004e48e90, 0xc00415ac00, 0xc003da3720})

... skipping 81 lines ...
      Jan 23 11:44:35.625: Unexpected error:

          <*fmt.wrapError | 0xc003da3720>: {
              msg: "unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:34230->192.168.6.175:6443: read: connection reset by peer",

... skipping 12 lines ...
          unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:34230->192.168.6.175:6443: read: connection reset by peer

... skipping 26 lines ...
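The panic above is statefulset.GetPodList treating a mid-read "connection reset by peer" as fatal. Purely as an illustration of the alternative (not how the framework itself behaves), the same list can be wrapped in client-go's retry helper; the namespace is taken from the surrounding log lines and the retry-everything predicate is deliberately naive:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	var pods *corev1.PodList
	// Retry the list on any error with the default backoff; a real caller
	// would be more selective about which errors are worth retrying, and
	// would filter on the StatefulSet's label selector rather than listing all.
	err = retry.OnError(retry.DefaultBackoff,
		func(err error) bool { return true },
		func() error {
			var listErr error
			pods, listErr = cs.CoreV1().Pods("statefulset-822").List(context.TODO(), metav1.ListOptions{})
			return listErr
		})
	if err != nil {
		fmt.Println("list still failing:", err)
		return
	}
	fmt.Println("pods:", len(pods.Items))
}
```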
  {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":10,"skipped":198,"failed":4,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]"]}

... skipping 13 lines ...
  Jan 23 11:45:48.310: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3e855fe8-726b-4385-bf7f-ddbfb378fe5e" in namespace "projected-3377" to be "Succeeded or Failed"

... skipping 11 lines ...
  Jan 23 11:46:13.900: INFO: Pod "pod-projected-configmaps-3e855fe8-726b-4385-bf7f-ddbfb378fe5e" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":463,"failed":8,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":6,"skipped":193,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]"]}

... skipping 106 lines ...
  {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":7,"skipped":193,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]"]}

... skipping 24 lines ...
  {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":8,"skipped":226,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]"]}

... skipping 22 lines ...
  {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":235,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]"]}

... skipping 15 lines ...
  Jan 23 11:46:24.894: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc0043c3f20>: {

... skipping 41 lines ...
  Jan 23 11:46:25.281: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:46:25.555: FAIL: Couldn't delete ns: "pod-network-test-1956": Delete "https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-1956": read tcp 172.18.0.3:60620->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-1956", Err:(*net.OpError)(0xc00443af00)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0032d5440, 0x112}, {0xc002ed2c08, 0x6ec4cca, 0xc002ed2c30})

... skipping 23 lines ...
      Jan 23 11:46:24.894: Error creating Pod

      Unexpected error:

          <*url.Error | 0xc0043c3f20>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":248,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:46:27.971: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc00457c9c0>: {

... skipping 41 lines ...
  Jan 23 11:46:28.295: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:46:28.629: FAIL: Couldn't delete ns: "pod-network-test-6402": Delete "https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-6402": read tcp 172.18.0.3:42786->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-6402", Err:(*net.OpError)(0xc0020ca0a0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001b8cfc0, 0x112}, {0xc002ed2c08, 0x6ec4cca, 0xc002ed2c30})

... skipping 23 lines ...
      Jan 23 11:46:27.971: Error creating Pod

      Unexpected error:

          <*url.Error | 0xc00457c9c0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":248,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:46:30.940: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc002ca6750>: {

... skipping 41 lines ...
  Jan 23 11:46:31.239: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:46:31.624: FAIL: Couldn't delete ns: "pod-network-test-1447": Delete "https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-1447": read tcp 172.18.0.3:42850->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-1447", Err:(*net.OpError)(0xc001be7ae0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0022f07e0, 0x112}, {0xc002ed2c08, 0x6ec4cca, 0xc002ed2c30})

... skipping 23 lines ...
      Jan 23 11:46:30.940: Error creating Pod

      Unexpected error:

          <*url.Error | 0xc002ca6750>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":248,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 18 lines ...
  Jan 23 11:46:40.091: FAIL: creating service e2e-test-webhook in namespace webhook-3391

  Unexpected error:

      <*url.Error | 0xc0034087b0>: {

... skipping 33 lines ...
  Jan 23 11:46:40.413: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 12 lines ...
  Jan 23 11:46:41.314: FAIL: Couldn't delete ns: "webhook-3391": Delete "https://192.168.6.175:6443/api/v1/namespaces/webhook-3391": read tcp 172.18.0.3:51056->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/webhook-3391", Err:(*net.OpError)(0xc00381d630)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0027a10e0, 0x112}, {0xc002ed2c08, 0x6ec4cca, 0xc002ed2c30})

... skipping 24 lines ...
    Unexpected error:

        <*url.Error | 0xc0034087b0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":9,"skipped":285,"failed":9,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

... skipping 8 lines ...
  Jan 23 11:46:42.992: FAIL: error labeling namespace webhook-5799

  Unexpected error:

      <*url.Error | 0xc0034c3a10>: {

... skipping 33 lines ...
  Jan 23 11:46:43.377: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:46:43.762: FAIL: Couldn't delete ns: "webhook-5799": Delete "https://192.168.6.175:6443/api/v1/namespaces/webhook-5799": read tcp 172.18.0.3:51092->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/webhook-5799", Err:(*net.OpError)(0xc0022fc870)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0032defc0, 0x112}, {0xc002ed2c08, 0x6ec4cca, 0xc002ed2c30})

... skipping 23 lines ...
    Jan 23 11:46:42.992: error labeling namespace webhook-5799

    Unexpected error:

        <*url.Error | 0xc0034c3a10>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":9,"skipped":285,"failed":10,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

... skipping 8 lines ...
  Jan 23 11:46:46.080: FAIL: creating namespace for webhook configuration ready markers

  Unexpected error:

      <*url.Error | 0xc004741f20>: {

... skipping 33 lines ...
  Jan 23 11:46:46.466: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:46:46.781: FAIL: Couldn't delete ns: "webhook-3692": Delete "https://192.168.6.175:6443/api/v1/namespaces/webhook-3692": read tcp 172.18.0.3:49984->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/webhook-3692", Err:(*net.OpError)(0xc0000ab680)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0022f0ea0, 0x112}, {0xc002ed2c08, 0x6ec4cca, 0xc002ed2c30})

... skipping 24 lines ...
    Unexpected error:

        <*url.Error | 0xc004741f20>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":9,"skipped":285,"failed":11,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

... skipping 11 lines ...
  [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]

... skipping 3 lines ...
  E0123 11:47:04.732681      17 retrywatcher.go:130] "Watch failed" err="Get \"https://192.168.6.175:6443/api/v1/namespaces/init-container-9502/pods?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dpod-init-5c30ee18-f537-4b3c-ade5-902e4c2e3e3d&resourceVersion=8109&watch=true\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")"

  Jan 23 11:47:15.803: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-5c30ee18-f537-4b3c-ade5-902e4c2e3e3d", GenerateName:"", Namespace:"init-container-9502", SelfLink:"", UID:"bd39c55a-ef93-4ef2-b289-7a1067fe056e", ResourceVersion:"8160", Generation:0, CreationTimestamp:time.Date(2023, time.January, 23, 11, 45, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"301788065"}, Annotations:map[string]string{"cni.projectcalico.org/containerID":"636a3e1485ae591209e43c6865e88eb2f0be2bb7346c91abd32bd5968cb6b0ee", "cni.projectcalico.org/podIP":"192.168.30.70/32", "cni.projectcalico.org/podIPs":"192.168.30.70/32"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 23, 11, 45, 56, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00429c0d8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"calico", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 23, 11, 46, 30, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00429c108), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"Go-http-client", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 23, 11, 46, 31, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00429c138), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-6z9d8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc004106060), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-6z9d8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-6z9d8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-6z9d8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0010f0158), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k8s-conformance-8hxc51-md-0-75bfdd6df6-9nww5", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004068000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0010f01d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0010f01f0)}}, 
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0010f01f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0010f01fc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00184a310), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 23, 11, 46, 29, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 23, 11, 46, 29, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 23, 11, 46, 29, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 23, 11, 45, 56, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"192.168.6.61", PodIP:"192.168.30.70", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.30.70"}}, StartTime:time.Date(2023, time.January, 23, 11, 46, 29, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0040680e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc004068150)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://36ef51f1cbc9a27f76fdb60d78820d60533f0ffe188b79303a780c2caf134a43", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004106140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004106100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.6", ImageID:"", ContainerID:"", 
Started:(*bool)(0xc0010f027f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}

... skipping 7 lines ...
  {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":11,"skipped":204,"failed":4,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]"]}

... skipping 8 lines ...
  Jan 23 11:47:16.453: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:46138->192.168.6.175:6443: read: connection reset by peer

... skipping 23 lines ...
  {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":12,"skipped":207,"failed":4,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]"]}

... skipping 13 lines ...
  Jan 23 11:46:48.812: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-2f02e3a5-a1a1-4351-9759-df8289522a4e" in namespace "security-context-test-1599" to be "Succeeded or Failed"

... skipping 20 lines ...
  Jan 23 11:47:36.008: INFO: Pod "alpine-nnp-false-2f02e3a5-a1a1-4351-9759-df8289522a4e" satisfied condition "Succeeded or Failed"

... skipping 7 lines ...
  {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":288,"failed":11,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]"]}

... skipping 15 lines ...
  Jan 23 11:47:37.772: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc004ae40f0>: {

... skipping 41 lines ...
  Jan 23 11:47:38.152: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 21 lines ...
      Jan 23 11:47:37.772: Error creating Pod

      Unexpected error:

          <*url.Error | 0xc004ae40f0>: {

... skipping 29 lines ...
  Jan 23 11:47:37.966: FAIL: creating role binding webhook-895:webhook to access configMap

  Unexpected error:

      <*url.Error | 0xc0022558f0>: {

... skipping 33 lines ...
  Jan 23 11:47:38.278: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 23 lines ...
    Unexpected error:

        <*url.Error | 0xc0022558f0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":297,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:47:40.803: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc004a18b40>: {

... skipping 41 lines ...
  Jan 23 11:47:41.120: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:47:41.498: FAIL: Couldn't delete ns: "pod-network-test-9346": Delete "https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-9346": read tcp 172.18.0.3:42330->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-9346", Err:(*net.OpError)(0xc0035c0190)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc003247440, 0x112}, {0xc003e22c08, 0x6ec4cca, 0xc003e22c30})

... skipping 23 lines ...
      Jan 23 11:47:40.803: Error creating Pod

      Unexpected error:

          <*url.Error | 0xc004a18b40>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":297,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:47:43.708: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc003e60fc0>: {

... skipping 41 lines ...
  Jan 23 11:47:44.027: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:47:44.426: FAIL: Couldn't delete ns: "pod-network-test-9744": Delete "https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-9744": read tcp 172.18.0.3:42366->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-9744", Err:(*net.OpError)(0xc003a19950)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001b8d0e0, 0x112}, {0xc003e22c08, 0x6ec4cca, 0xc003e22c30})

... skipping 23 lines ...
      Jan 23 11:47:43.708: Error creating Pod

      Unexpected error:

          <*url.Error | 0xc003e60fc0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":297,"failed":14,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 22 lines ...
  Jan 23 11:48:07.969: FAIL: Couldn't delete ns: "job-7964": Delete "https://192.168.6.175:6443/api/v1/namespaces/job-7964": read tcp 172.18.0.3:50070->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/job-7964", Err:(*net.OpError)(0xc00397d090)})

... skipping 20 lines ...
    Jan 23 11:48:07.969: Couldn't delete ns: "job-7964": Delete "https://192.168.6.175:6443/api/v1/namespaces/job-7964": read tcp 172.18.0.3:50070->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/job-7964", Err:(*net.OpError)(0xc00397d090)})

... skipping 3 lines ...
  {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":12,"skipped":232,"failed":5,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

... skipping 55 lines ...
  {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":13,"skipped":232,"failed":5,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:47:45.587: INFO: Waiting up to 5m0s for pod "downward-api-4ebe4d62-617b-4965-b9f0-32443744affe" in namespace "downward-api-8652" to be "Succeeded or Failed"

... skipping 24 lines ...
  Jan 23 11:48:46.048: INFO: Pod "downward-api-4ebe4d62-617b-4965-b9f0-32443744affe" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":301,"failed":14,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:48:40.379: INFO: Waiting up to 5m0s for pod "security-context-d822cf8f-1511-46f6-890d-188ada01fd1d" in namespace "security-context-9248" to be "Succeeded or Failed"

... skipping 4 lines ...
  Jan 23 11:48:46.445: INFO: Pod "security-context-d822cf8f-1511-46f6-890d-188ada01fd1d" satisfied condition "Succeeded or Failed"

... skipping 7 lines ...
  E0123 11:48:47.582178      17 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:47070->192.168.6.175:6443: read: connection reset by peer

  Jan 23 11:48:47.582: FAIL: All nodes should be ready after test, unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:47070->192.168.6.175:6443: read: connection reset by peer

... skipping 11 lines ...
  Jan 23 11:48:47.962: FAIL: Couldn't delete ns: "security-context-9248": Delete "https://192.168.6.175:6443/api/v1/namespaces/security-context-9248": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/security-context-9248", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc0008c6c00), hintErr:(*errors.errorString)(0xc0001924a0), hintCert:(*x509.Certificate)(0xc000c8fb80)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00072c8c0, 0xd3}, {0xc00146cc08, 0x6ec4cca, 0xc00146cc30})

... skipping 21 lines ...
    Jan 23 11:48:47.582: All nodes should be ready after test, unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:47070->192.168.6.175:6443: read: connection reset by peer

... skipping 3 lines ...
  {"msg":"FAILED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":236,"failed":6,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]"]}

... skipping 9 lines ...
  Jan 23 11:48:49.335: INFO: Waiting up to 5m0s for pod "security-context-a835a8f7-5baa-445b-98c0-82d0b1feb455" in namespace "security-context-9505" to be "Succeeded or Failed"

... skipping 21 lines ...
  Jan 23 11:49:40.636: INFO: Pod "security-context-a835a8f7-5baa-445b-98c0-82d0b1feb455" satisfied condition "Succeeded or Failed"

... skipping 16 lines ...
  Jan 23 11:48:47.455: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:47064->192.168.6.175:6443: read: connection reset by peer

... skipping 11 lines ...
  E0123 11:49:12.180049      35 retrywatcher.go:130] "Watch failed" err="Get \"https://192.168.6.175:6443/api/v1/namespaces/pods-6632/pods?allowWatchBookmarks=true&labelSelector=test-pod-static%3Dtrue&resourceVersion=9068&watch=true\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")"

... skipping 27 lines ...
  {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":12,"skipped":307,"failed":14,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 3 lines ...
  {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":236,"failed":6,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]"]}

... skipping 9 lines ...
  Jan 23 11:49:44.164: INFO: Waiting up to 5m0s for pod "pod-947b1120-bf38-4a7c-934d-75cf6abc97f2" in namespace "emptydir-8699" to be "Succeeded or Failed"

... skipping 5 lines ...
  Jan 23 11:49:52.121: INFO: Pod "pod-947b1120-bf38-4a7c-934d-75cf6abc97f2" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":236,"failed":6,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]"]}

... skipping 21 lines ...
  Jan 23 11:49:54.330: FAIL: Couldn't delete ns: "configmap-9913": Delete "https://192.168.6.175:6443/api/v1/namespaces/configmap-9913": read tcp 172.18.0.3:43850->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/configmap-9913", Err:(*net.OpError)(0xc001b9b900)})

... skipping 20 lines ...
    Jan 23 11:49:54.330: Couldn't delete ns: "configmap-9913": Delete "https://192.168.6.175:6443/api/v1/namespaces/configmap-9913": read tcp 172.18.0.3:43850->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/configmap-9913", Err:(*net.OpError)(0xc001b9b900)})

... skipping 3 lines ...
  {"msg":"FAILED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":10,"skipped":493,"failed":9,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:48:08.289: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 17 lines ...
  {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":11,"skipped":493,"failed":9,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]"]}

... skipping 24 lines ...
  Jan 23 11:49:57.372: FAIL: Unexpected error:

      <*url.Error | 0xc0021b6390>: {

... skipping 31 lines ...
  Jan 23 11:49:57.629: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:49:57.945: FAIL: Couldn't delete ns: "svcaccounts-428": Delete "https://192.168.6.175:6443/api/v1/namespaces/svcaccounts-428": read tcp 172.18.0.3:48046->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/svcaccounts-428", Err:(*net.OpError)(0xc003a11590)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001fec360, 0x112}, {0xc00146cc08, 0x6ec4cca, 0xc00146cc30})

... skipping 21 lines ...
    Jan 23 11:49:57.372: Unexpected error:

        <*url.Error | 0xc0021b6390>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":12,"skipped":374,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:49:54.586: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 13 lines ...
  Jan 23 11:50:00.182: FAIL: Couldn't delete ns: "configmap-5721": Delete "https://192.168.6.175:6443/api/v1/namespaces/configmap-5721": read tcp 172.18.0.3:48060->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/configmap-5721", Err:(*net.OpError)(0xc0033f0370)})

... skipping 20 lines ...
    Jan 23 11:50:00.183: Couldn't delete ns: "configmap-5721": Delete "https://192.168.6.175:6443/api/v1/namespaces/configmap-5721": read tcp 172.18.0.3:48060->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/configmap-5721", Err:(*net.OpError)(0xc0033f0370)})

... skipping 3 lines ...
  {"msg":"FAILED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":15,"skipped":261,"failed":7,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]"]}

... skipping 11 lines ...
  Jan 23 11:50:00.190: FAIL: Unexpected error:

      <*url.Error | 0xc003e6d860>: {

... skipping 31 lines ...
  Jan 23 11:50:00.413: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:50:00.673: FAIL: Couldn't delete ns: "svcaccounts-3595": Delete "https://192.168.6.175:6443/api/v1/namespaces/svcaccounts-3595": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/svcaccounts-3595", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc002327080), hintErr:(*errors.errorString)(0xc0001924a0), hintCert:(*x509.Certificate)(0xc000c8fb80)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001a30ea0, 0x112}, {0xc00146cc08, 0x6ec4cca, 0xc00146cc30})

... skipping 21 lines ...
    Jan 23 11:50:00.190: Unexpected error:

        <*url.Error | 0xc003e6d860>: {

... skipping 24 lines ...
  Jan 23 11:49:57.246: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:47008->192.168.6.175:6443: read: connection reset by peer

... skipping 6 lines ...
  Jan 23 11:50:00.183: FAIL: Unexpected error:

      <*url.Error | 0xc002aab7d0>: {

... skipping 32 lines ...
  Jan 23 11:50:00.402: INFO: Could not list Deployments in namespace "deployment-6729": Get "https://192.168.6.175:6443/apis/apps/v1/namespaces/deployment-6729/deployments": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 3 lines ...
  Jan 23 11:50:00.581: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:50:00.852: FAIL: Couldn't delete ns: "deployment-6729": Delete "https://192.168.6.175:6443/api/v1/namespaces/deployment-6729": read tcp 172.18.0.3:48122->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/deployment-6729", Err:(*net.OpError)(0xc001931ef0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001e08360, 0x112}, {0xc0012d0c08, 0x6ec4cca, 0xc0012d0c30})

... skipping 21 lines ...
    Jan 23 11:50:00.183: Unexpected error:

        <*url.Error | 0xc002aab7d0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":12,"skipped":374,"failed":16,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:50:00.400: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 16 lines ...
  {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":13,"skipped":374,"failed":16,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":15,"skipped":261,"failed":8,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:50:00.919: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:48124->192.168.6.175:6443: read: connection reset by peer

  Jan 23 11:50:03.396: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:48150->192.168.6.175:6443: read: connection reset by peer

... skipping 20 lines ...
  Jan 23 11:50:09.407: FAIL: Unexpected error:

      <*url.Error | 0xc002153800>: {

... skipping 31 lines ...
  Jan 23 11:50:09.757: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:50:10.039: FAIL: Couldn't delete ns: "svcaccounts-1131": Delete "https://192.168.6.175:6443/api/v1/namespaces/svcaccounts-1131": read tcp 172.18.0.3:46936->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/svcaccounts-1131", Err:(*net.OpError)(0xc002e20d20)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00209a5a0, 0x112}, {0xc00146cc08, 0x6ec4cca, 0xc00146cc30})

... skipping 21 lines ...
    Jan 23 11:50:09.407: Unexpected error:

        <*url.Error | 0xc002153800>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":15,"skipped":261,"failed":9,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]"]}

... skipping 8 lines ...
  Jan 23 11:50:06.367: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:48176->192.168.6.175:6443: read: connection reset by peer

... skipping 3 lines ...
  Jan 23 11:50:09.534: FAIL: Failed to mark config map "immutable" in namespace "configmap-8573" as immutable

  Unexpected error:

      <*url.Error | 0xc004324d20>: {

... skipping 31 lines ...
  Jan 23 11:50:09.913: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 19 lines ...
    Jan 23 11:50:09.534: Failed to mark config map "immutable" in namespace "configmap-8573" as immutable

    Unexpected error:

        <*url.Error | 0xc004324d20>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":13,"skipped":415,"failed":17,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]"]}

... skipping 8 lines ...
  Jan 23 11:50:12.521: FAIL: Failed to update config map "immutable" in namespace "configmap-2050"

  Unexpected error:

      <*url.Error | 0xc002f3baa0>: {

... skipping 31 lines ...
  Jan 23 11:50:12.943: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 19 lines ...
    Jan 23 11:50:12.521: Failed to update config map "immutable" in namespace "configmap-2050"

    Unexpected error:

        <*url.Error | 0xc002f3baa0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":13,"skipped":415,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]"]}

... skipping 8 lines ...
  Jan 23 11:50:15.451: FAIL: Failed to update config map "immutable" in namespace "configmap-5386"

  Unexpected error:

      <*url.Error | 0xc0039e2a20>: {

... skipping 31 lines ...
  Jan 23 11:50:15.771: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:50:16.194: FAIL: Couldn't delete ns: "configmap-5386": Delete "https://192.168.6.175:6443/api/v1/namespaces/configmap-5386": read tcp 172.18.0.3:47046->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/configmap-5386", Err:(*net.OpError)(0xc002bac910)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001f22480, 0x112}, {0xc0026b4c08, 0x6ec4cca, 0xc0026b4c30})

... skipping 21 lines ...
    Jan 23 11:50:15.451: Failed to update config map "immutable" in namespace "configmap-5386"

    Unexpected error:

        <*url.Error | 0xc0039e2a20>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":13,"skipped":415,"failed":19,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]"]}

... skipping 13 lines ...
  Jan 23 11:50:17.573: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-25503610-5c04-4eb0-93ed-a0f25015c7d6" in namespace "projected-649" to be "Succeeded or Failed"

... skipping 7 lines ...
  Jan 23 11:50:32.678: INFO: Pod "pod-projected-secrets-25503610-5c04-4eb0-93ed-a0f25015c7d6" satisfied condition "Succeeded or Failed"

... skipping 2 lines ...
  Jan 23 11:50:33.640: FAIL: Failed to delete pod "pod-projected-secrets-25503610-5c04-4eb0-93ed-a0f25015c7d6": Delete "https://192.168.6.175:6443/api/v1/namespaces/projected-649/pods/pod-projected-secrets-25503610-5c04-4eb0-93ed-a0f25015c7d6": read tcp 172.18.0.3:57854->192.168.6.175:6443: read: connection reset by peer

... skipping 25 lines ...
  Jan 23 11:50:34.020: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 19 lines ...
    Jan 23 11:50:33.640: Failed to delete pod "pod-projected-secrets-25503610-5c04-4eb0-93ed-a0f25015c7d6": Delete "https://192.168.6.175:6443/api/v1/namespaces/projected-649/pods/pod-projected-secrets-25503610-5c04-4eb0-93ed-a0f25015c7d6": read tcp 172.18.0.3:57854->192.168.6.175:6443: read: connection reset by peer

... skipping 29 lines ...
  {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":1,"skipped":65,"failed":6,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}

... skipping 9 lines ...
  Jan 23 11:51:04.000: FAIL: unable to create test configMap : Post "https://192.168.6.175:6443/api/v1/namespaces/configmap-2550/configmaps": read tcp 172.18.0.3:56996->192.168.6.175:6443: read: connection reset by peer

... skipping 26 lines ...
  {"msg":"FAILED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":11,"skipped":518,"failed":10,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]"]}

... skipping 15 lines ...
  Jan 23 11:51:03.621: FAIL: error in waiting for pods to come up: failed to wait for pods running: [Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-9bqv4": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-5ld7v": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-ff7j2": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-6kr8x": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-gtwrx": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-gklhj": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-q6nkn": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]

  Unexpected error:

      <*errors.errorString | 0xc001f1d8b0>: {
          s: "failed to wait for pods running: [Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-9bqv4\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\") Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-5ld7v\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\") Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-ff7j2\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\") Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-6kr8x\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\") Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-gtwrx\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\") Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-gklhj\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\") Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-q6nkn\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")]",

      }
      failed to wait for pods running: [Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-9bqv4": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-5ld7v": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-ff7j2": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-6kr8x": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-gtwrx": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-gklhj": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-q6nkn": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]

... skipping 54 lines ...
    Jan 23 11:51:03.621: error in waiting for pods to come up: failed to wait for pods running: [Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-9bqv4": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-5ld7v": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-ff7j2": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-6kr8x": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-gtwrx": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-gklhj": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-q6nkn": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]

    Unexpected error:

        <*errors.errorString | 0xc001f1d8b0>: {
            s: "failed to wait for pods running: [Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-9bqv4\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\") Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-5ld7v\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\") Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-ff7j2\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\") Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-6kr8x\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\") Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-gtwrx\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\") Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-gklhj\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\") Get \"https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-q6nkn\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")]",

        }
        failed to wait for pods running: [Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-9bqv4": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-5ld7v": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-ff7j2": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-6kr8x": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-gtwrx": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-gklhj": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") Get "https://192.168.6.175:6443/api/v1/namespaces/deployment-188/pods/webserver-deployment-5d9fdcc779-q6nkn": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]

... skipping 4 lines ...
  {"msg":"FAILED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":65,"failed":7,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]"]}

... skipping 10 lines ...
  Jan 23 11:51:07.139: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc0039c3b00>: {

... skipping 35 lines ...
  Jan 23 11:51:07.456: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 19 lines ...
    Jan 23 11:51:07.139: Error creating Pod

    Unexpected error:

        <*url.Error | 0xc0039c3b00>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":11,"skipped":518,"failed":11,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]"]}

... skipping 11 lines ...
  Jan 23 11:51:07.013: FAIL: Unexpected error:

      <*url.Error | 0xc002cb4960>: {

... skipping 32 lines ...
  Jan 23 11:51:07.392: INFO: Could not list Deployments in namespace "deployment-6553": Get "https://192.168.6.175:6443/apis/apps/v1/namespaces/deployment-6553/deployments": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 12 lines ...
    Jan 23 11:51:07.014: Unexpected error:

        <*url.Error | 0xc002cb4960>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":11,"skipped":518,"failed":12,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]"]}

... skipping 32 lines ...
  Jan 23 11:51:13.241: FAIL: Unexpected error:

... skipping 2 lines ...
              s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8526 create -f -:\nCommand stdout:\n\nstderr:\nE0123 11:51:13.237044     488 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:48622->192.168.6.175:6443: read: connection reset by peer\nerror: unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:48622->192.168.6.175:6443: read: connection reset by peer\n\nerror:\nexit status 1",

... skipping 3 lines ...
      error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8526 create -f -:

... skipping 3 lines ...
      E0123 11:51:13.237044     488 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:48622->192.168.6.175:6443: read: connection reset by peer

      error: unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:48622->192.168.6.175:6443: read: connection reset by peer

      
      error:

... skipping 25 lines ...
  Jan 23 11:51:13.601: FAIL: Unexpected error:

... skipping 2 lines ...
              s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8526 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: unable to recognize \"STDIN\": Get \"https://192.168.6.175:6443/api?timeout=32s\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")\n\nerror:\nexit status 1",

... skipping 3 lines ...
      error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8526 delete --grace-period=0 --force -f -:

... skipping 4 lines ...
      error: unable to recognize "STDIN": Get "https://192.168.6.175:6443/api?timeout=32s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

      
      error:

... skipping 16 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00172e480, 0x47c}, {0xc00137ad68, 0x6ec4cca, 0xc00137ad88})

  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
  k8s.io/kubernetes/test/e2e/framework.Fail({0xc00172e000, 0x467}, {0xc003268078, 0xc00172e000, 0x1})

... skipping 41 lines ...
      Jan 23 11:51:13.241: Unexpected error:

... skipping 2 lines ...
                  s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8526 create -f -:\nCommand stdout:\n\nstderr:\nE0123 11:51:13.237044     488 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:48622->192.168.6.175:6443: read: connection reset by peer\nerror: unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:48622->192.168.6.175:6443: read: connection reset by peer\n\nerror:\nexit status 1",

... skipping 3 lines ...
          error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8526 create -f -:

... skipping 3 lines ...
          E0123 11:51:13.237044     488 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:48622->192.168.6.175:6443: read: connection reset by peer

          error: unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:48622->192.168.6.175:6443: read: connection reset by peer

          
          error:

... skipping 5 lines ...
  {"msg":"FAILED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":65,"failed":8,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]"]}

... skipping 21 lines ...
  {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":65,"failed":8,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":437,"failed":20,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]"]}

... skipping 10 lines ...
  Jan 23 11:50:35.489: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-25407aeb-a6bc-46ec-9972-4d2e979ec1d9" in namespace "projected-2925" to be "Succeeded or Failed"

... skipping 18 lines ...
  Jan 23 11:51:20.547: INFO: Pod "pod-projected-secrets-25407aeb-a6bc-46ec-9972-4d2e979ec1d9" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":437,"failed":20,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":11,"skipped":533,"failed":13,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

... skipping 29 lines ...
  Jan 23 11:51:16.423: FAIL: Unexpected error:

... skipping 2 lines ...
              s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3676 create -f -:\nCommand stdout:\n\nstderr:\nE0123 11:51:16.419459     508 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:48666->192.168.6.175:6443: read: connection reset by peer\nerror: unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:48666->192.168.6.175:6443: read: connection reset by peer\n\nerror:\nexit status 1",

... skipping 3 lines ...
      error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3676 create -f -:

... skipping 3 lines ...
      E0123 11:51:16.419459     508 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:48666->192.168.6.175:6443: read: connection reset by peer

      error: unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:48666->192.168.6.175:6443: read: connection reset by peer

      
      error:

... skipping 25 lines ...
  Jan 23 11:51:25.303: FAIL: Unexpected error:

... skipping 2 lines ...
              s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3676 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: unable to recognize \"STDIN\": Get \"https://192.168.6.175:6443/api?timeout=32s\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")\n\nerror:\nexit status 1",

... skipping 3 lines ...
      error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3676 delete --grace-period=0 --force -f -:

... skipping 4 lines ...
      error: unable to recognize "STDIN": Get "https://192.168.6.175:6443/api?timeout=32s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

      
      error:

... skipping 16 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00162f200, 0x47c}, {0xc00137ad68, 0x6ec4cca, 0xc00137ad88})

  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
  k8s.io/kubernetes/test/e2e/framework.Fail({0xc00162ed80, 0x467}, {0xc0033502c0, 0xc00162ed80, 0xc004652a80})

... skipping 41 lines ...
      Jan 23 11:51:16.423: Unexpected error:

... skipping 2 lines ...
                  s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3676 create -f -:\nCommand stdout:\n\nstderr:\nE0123 11:51:16.419459     508 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:48666->192.168.6.175:6443: read: connection reset by peer\nerror: unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:48666->192.168.6.175:6443: read: connection reset by peer\n\nerror:\nexit status 1",

... skipping 3 lines ...
          error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3676 create -f -:

... skipping 3 lines ...
          E0123 11:51:16.419459     508 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:48666->192.168.6.175:6443: read: connection reset by peer

          error: unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:48666->192.168.6.175:6443: read: connection reset by peer

          
          error:

... skipping 5 lines ...
  {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":11,"skipped":533,"failed":14,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

... skipping 29 lines ...
  Jan 23 11:51:28.506: FAIL: Unexpected error:

... skipping 2 lines ...
              s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3417 create -f -:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")\n\nerror:\nexit status 1",

... skipping 3 lines ...
      error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3417 create -f -:

... skipping 3 lines ...
      Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

      
      error:

... skipping 25 lines ...
  Jan 23 11:51:28.943: FAIL: Unexpected error:

... skipping 2 lines ...
              s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3417 delete --grace-period=0 --force -f -:\nCommand stdout:\n\nstderr:\nwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when deleting \"STDIN\": Delete \"https://192.168.6.175:6443/api/v1/namespaces/kubectl-3417/services/agnhost-replica\": read tcp 172.18.0.3:38508->192.168.6.175:6443: read: connection reset by peer\n\nerror:\nexit status 1",

... skipping 3 lines ...
      error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3417 delete --grace-period=0 --force -f -:

... skipping 4 lines ...
      error: error when deleting "STDIN": Delete "https://192.168.6.175:6443/api/v1/namespaces/kubectl-3417/services/agnhost-replica": read tcp 172.18.0.3:38508->192.168.6.175:6443: read: connection reset by peer

      
      error:

... skipping 16 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001906e00, 0x37b}, {0xc00137ad68, 0x6ec4cca, 0xc00137ad88})

  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
  k8s.io/kubernetes/test/e2e/framework.Fail({0xc001906a80, 0x366}, {0xc003c54f18, 0xc001906a80, 0xc003f8b820})

... skipping 41 lines ...
      Jan 23 11:51:28.506: Unexpected error:

... skipping 2 lines ...
                  s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3417 create -f -:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")\n\nerror:\nexit status 1",

... skipping 3 lines ...
          error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3417 create -f -:

... skipping 3 lines ...
          Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

          
          error:

... skipping 5 lines ...
  {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":11,"skipped":533,"failed":15,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

... skipping 15 lines ...
  Jan 23 11:51:31.436: FAIL: creating cluster role binding wardler:aggregator-8070:auth-delegator

  Unexpected error:

      <*url.Error | 0xc003970060>: {

... skipping 45 lines ...
    Unexpected error:

        <*url.Error | 0xc003970060>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":11,"skipped":581,"failed":16,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

... skipping 11 lines ...
  Jan 23 11:51:37.346: FAIL: creating cluster role binding wardler:aggregator-7816:auth-delegator

  Unexpected error:

      <*url.Error | 0xc0043c5d10>: {

... skipping 45 lines ...
    Unexpected error:

        <*url.Error | 0xc0043c5d10>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":11,"skipped":581,"failed":17,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:51:40.620: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:60180->192.168.6.175:6443: read: connection reset by peer

... skipping 8 lines ...
  Jan 23 11:51:55.551: FAIL: listing flunders using dynamic client

  Unexpected error:

      <*url.Error | 0xc004af2120>: {

... skipping 36 lines ...
  Jan 23 11:51:58.628: FAIL: Couldn't delete ns: "aggregator-7430": Delete "https://192.168.6.175:6443/api/v1/namespaces/aggregator-7430": read tcp 172.18.0.3:51710->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/aggregator-7430", Err:(*net.OpError)(0xc003d5abe0)})

... skipping 21 lines ...
    Unexpected error:

        <*url.Error | 0xc004af2120>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":11,"skipped":581,"failed":18,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

... skipping 51 lines ...
  {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":15,"skipped":452,"failed":20,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]"]}

... skipping 8 lines ...
  Jan 23 11:51:58.987: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 5 lines ...
  Jan 23 11:52:03.914: INFO: Waiting up to 5m0s for pod "pod-secrets-1762bfae-ea5d-4c42-b6d0-284333452d5f" in namespace "secrets-7098" to be "Succeeded or Failed"

... skipping 10 lines ...
  Jan 23 11:52:22.741: INFO: Pod "pod-secrets-1762bfae-ea5d-4c42-b6d0-284333452d5f" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":582,"failed":18,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

... skipping 27 lines ...
  {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":13,"skipped":666,"failed":18,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

... skipping 40 lines ...
  Jan 23 11:53:14.673: FAIL: Couldn't delete ns: "pods-2272": Delete "https://192.168.6.175:6443/api/v1/namespaces/pods-2272": read tcp 172.18.0.3:43384->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/pods-2272", Err:(*net.OpError)(0xc0037aaff0)})

... skipping 20 lines ...
    Jan 23 11:53:14.673: Couldn't delete ns: "pods-2272": Delete "https://192.168.6.175:6443/api/v1/namespaces/pods-2272": read tcp 172.18.0.3:43384->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/pods-2272", Err:(*net.OpError)(0xc0037aaff0)})

... skipping 61 lines ...
  Jan 23 11:54:01.697: INFO: Pod "pod-update-activedeadlineseconds-b0a4f2b2-fb91-43e1-9585-48f7b00748fb": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 50.796864185s

... skipping 8 lines ...
  {"msg":"FAILED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":705,"failed":19,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:53:15.050: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 11:53:17.651: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:39562->192.168.6.175:6443: read: connection reset by peer

... skipping 35 lines ...
  {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":705,"failed":19,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]"]}

... skipping 3 lines ...
  {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":481,"failed":20,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]"]}

... skipping 9 lines ...
  Jan 23 11:54:03.027: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc00363dc80>: {

... skipping 41 lines ...
  Jan 23 11:54:03.424: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:54:03.708: FAIL: Couldn't delete ns: "emptydir-8184": Delete "https://192.168.6.175:6443/api/v1/namespaces/emptydir-8184": read tcp 172.18.0.3:34560->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/emptydir-8184", Err:(*net.OpError)(0xc00363f450)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001102c60, 0x112}, {0xc0025b8c08, 0x6ec4cca, 0xc0025b8c30})

... skipping 21 lines ...
    Jan 23 11:54:03.027: Error creating Pod

    Unexpected error:

        <*url.Error | 0xc00363dc80>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":481,"failed":21,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 9 lines ...
  Jan 23 11:54:05.016: INFO: Waiting up to 5m0s for pod "pod-52b32fc6-6932-474a-b7da-5c5c6f022f0e" in namespace "emptydir-9663" to be "Succeeded or Failed"

... skipping 3 lines ...
  Jan 23 11:54:07.368: INFO: Pod "pod-52b32fc6-6932-474a-b7da-5c5c6f022f0e" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":481,"failed":21,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 30 lines ...
  Jan 23 11:54:09.207: FAIL: Couldn't delete ns: "events-3879": Delete "https://192.168.6.175:6443/api/v1/namespaces/events-3879": read tcp 172.18.0.3:38234->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/events-3879", Err:(*net.OpError)(0xc00397d130)})

... skipping 20 lines ...
    Jan 23 11:54:09.207: Couldn't delete ns: "events-3879": Delete "https://192.168.6.175:6443/api/v1/namespaces/events-3879": read tcp 172.18.0.3:38234->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/events-3879", Err:(*net.OpError)(0xc00397d130)})

... skipping 15 lines ...
  Jan 23 11:54:12.139: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc00392b9e0>: {

... skipping 41 lines ...
  Jan 23 11:54:12.455: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:54:13.014: FAIL: Couldn't delete ns: "pod-network-test-2808": Delete "https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-2808": read tcp 172.18.0.3:38342->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-2808", Err:(*net.OpError)(0xc001b9b900)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001305200, 0x112}, {0xc0025b8c08, 0x6ec4cca, 0xc0025b8c30})

... skipping 23 lines ...
      Jan 23 11:54:12.139: Error creating Pod

      Unexpected error:

          <*url.Error | 0xc00392b9e0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":14,"skipped":709,"failed":20,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:54:09.532: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 14 lines ...
  Jan 23 11:54:15.193: FAIL: failed to update the test event

  Unexpected error:

      <*url.Error | 0xc004f74540>: {

... skipping 31 lines ...
  Jan 23 11:54:15.528: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:54:15.837: FAIL: Couldn't delete ns: "events-2594": Delete "https://192.168.6.175:6443/api/v1/namespaces/events-2594": read tcp 172.18.0.3:38400->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/events-2594", Err:(*net.OpError)(0xc0026e09b0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001c8cb40, 0x112}, {0xc0012ccc08, 0x6ec4cca, 0xc0012ccc30})

... skipping 21 lines ...
    Jan 23 11:54:15.193: failed to update the test event

    Unexpected error:

        <*url.Error | 0xc004f74540>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":497,"failed":22,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:54:15.338: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc003888480>: {

... skipping 41 lines ...
  Jan 23 11:54:15.591: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 21 lines ...
      Jan 23 11:54:15.338: Error creating Pod

      Unexpected error:

          <*url.Error | 0xc003888480>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":497,"failed":23,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:54:18.209: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc00417c3c0>: {

... skipping 41 lines ...
  Jan 23 11:54:18.533: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:54:18.787: FAIL: Couldn't delete ns: "pod-network-test-5902": Delete "https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-5902": read tcp 172.18.0.3:47704->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-5902", Err:(*net.OpError)(0xc0031624b0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00229c240, 0x112}, {0xc0025b8c08, 0x6ec4cca, 0xc0025b8c30})

... skipping 23 lines ...
      Jan 23 11:54:18.209: Error creating Pod

      Unexpected error:

          <*url.Error | 0xc00417c3c0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":497,"failed":24,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":14,"skipped":709,"failed":21,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]"]}

... skipping 17 lines ...
  Jan 23 11:54:18.144: FAIL: failed to patch the test event

  Unexpected error:

      <*url.Error | 0xc0053449f0>: {

... skipping 31 lines ...
  Jan 23 11:54:18.470: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:54:18.850: FAIL: Couldn't delete ns: "events-7654": Delete "https://192.168.6.175:6443/api/v1/namespaces/events-7654": read tcp 172.18.0.3:47700->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/events-7654", Err:(*net.OpError)(0xc0037c6dc0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0025c7440, 0x112}, {0xc0012ccc08, 0x6ec4cca, 0xc0012ccc30})

... skipping 21 lines ...
    Jan 23 11:54:18.144: failed to patch the test event

    Unexpected error:

        <*url.Error | 0xc0053449f0>: {

... skipping 21 lines ...
  {"msg":"FAILED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":14,"skipped":709,"failed":22,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]"]}

... skipping 16 lines ...
  Jan 23 11:54:21.255: FAIL: Unexpected error:

... skipping 2 lines ...
              s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-201 create -f -:\nCommand stdout:\n\nstderr:\nE0123 11:54:21.252109     805 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:47732->192.168.6.175:6443: read: connection reset by peer\nerror: unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:47732->192.168.6.175:6443: read: connection reset by peer\n\nerror:\nexit status 1",

... skipping 3 lines ...
      error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-201 create -f -:

... skipping 3 lines ...
      E0123 11:54:21.252109     805 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:47732->192.168.6.175:6443: read: connection reset by peer

      error: unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:47732->192.168.6.175:6443: read: connection reset by peer

      
      error:

... skipping 32 lines ...
      Jan 23 11:54:21.255: Unexpected error:

... skipping 2 lines ...
                  s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-201 create -f -:\nCommand stdout:\n\nstderr:\nE0123 11:54:21.252109     805 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:47732->192.168.6.175:6443: read: connection reset by peer\nerror: unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:47732->192.168.6.175:6443: read: connection reset by peer\n\nerror:\nexit status 1",

... skipping 3 lines ...
          error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-201 create -f -:

... skipping 3 lines ...
          E0123 11:54:21.252109     805 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:47732->192.168.6.175:6443: read: connection reset by peer

          error: unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:47732->192.168.6.175:6443: read: connection reset by peer

          
          error:

... skipping 5 lines ...
  {"msg":"FAILED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":14,"skipped":718,"failed":23,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]"]}

... skipping 17 lines ...
  Jan 23 11:54:27.440: FAIL: Missing k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 in kubectl diff output:

  
  error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9014 diff -f -:

... skipping 5 lines ...
  error:

... skipping 28 lines ...
      error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9014 diff -f -:

... skipping 5 lines ...
      error:

... skipping 5 lines ...
  {"msg":"FAILED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":14,"skipped":718,"failed":24,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]"]}

... skipping 13 lines ...
  Jan 23 11:54:30.737: FAIL: Unexpected error:

... skipping 2 lines ...
              s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4335 create -f -:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")\n\nerror:\nexit status 1",

... skipping 3 lines ...
      error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4335 create -f -:

... skipping 3 lines ...
      Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

      
      error:

... skipping 32 lines ...
      Jan 23 11:54:30.737: Unexpected error:

... skipping 2 lines ...
                  s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4335 create -f -:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")\n\nerror:\nexit status 1",

... skipping 3 lines ...
          error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4335 create -f -:

... skipping 3 lines ...
          Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

          
          error:

... skipping 5 lines ...
  {"msg":"FAILED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":14,"skipped":718,"failed":25,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]"]}

... skipping 14 lines ...
  Jan 23 11:54:35.628: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52f1fcbc-bc1b-4553-b795-cee1986560ea" in namespace "downward-api-3517" to be "Succeeded or Failed"

... skipping 11 lines ...
  Jan 23 11:54:56.534: INFO: Pod "downwardapi-volume-52f1fcbc-bc1b-4553-b795-cee1986560ea" satisfied condition "Succeeded or Failed"
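
  The 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines are
  produced by polling the pod's phase until it reaches a terminal state. A
  sketch of an equivalent loop; the pod and namespace names are copied from the
  log purely as placeholders:

      package main

      import (
          "context"
          "fmt"
          "log"
          "time"

          corev1 "k8s.io/api/core/v1"
          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/apimachinery/pkg/util/wait"
          "k8s.io/client-go/kubernetes"
          "k8s.io/client-go/tools/clientcmd"
      )

      func main() {
          cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
          if err != nil {
              log.Fatal(err)
          }
          cs, err := kubernetes.NewForConfig(cfg)
          if err != nil {
              log.Fatal(err)
          }
          ns := "downward-api-3517"
          name := "downwardapi-volume-52f1fcbc-bc1b-4553-b795-cee1986560ea"
          // Poll every 2s, give up after 5m, matching the timeout in the log.
          err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
              pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
              if err != nil {
                  return false, err
              }
              return pod.Status.Phase == corev1.PodSucceeded ||
                  pod.Status.Phase == corev1.PodFailed, nil
          })
          if err != nil {
              log.Fatalf("pod never reached a terminal phase: %v", err)
          }
          fmt.Println(`pod satisfied condition "Succeeded or Failed"`)
      }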

... skipping 8 lines ...
  Jan 23 11:54:57.691: FAIL: Couldn't delete ns: "downward-api-3517": Delete "https://192.168.6.175:6443/api/v1/namespaces/downward-api-3517": read tcp 172.18.0.3:34562->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/downward-api-3517", Err:(*net.OpError)(0xc0037c73b0)})

... skipping 20 lines ...
    Jan 23 11:54:57.691: Couldn't delete ns: "downward-api-3517": Delete "https://192.168.6.175:6443/api/v1/namespaces/downward-api-3517": read tcp 172.18.0.3:34562->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/downward-api-3517", Err:(*net.OpError)(0xc0037c73b0)})

... skipping 21 lines ...
  E0123 11:50:27.931107      17 reflector.go:138] k8s.io/kubernetes/test/utils/pod_store.go:57: Failed to watch *v1.Pod: Get "https://192.168.6.175:6443/api/v1/namespaces/services-8604/pods?allowWatchBookmarks=true&labelSelector=name%3Daffinity-nodeport&resourceVersion=9792&timeout=8m42s&timeoutSeconds=522&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 20 lines ...
  Jan 23 11:51:31.566: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8604 exec execpod-affinity4pf47 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.6.120 31583:

... skipping 3 lines ...
  error: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/services-8604/pods/execpod-affinity4pf47/exec?command=%2Fbin%2Fsh&command=-x&command=-c&command=echo+hostName+%7C+nc+-v+-t+-w+2+192.168.6.120+31583&container=agnhost-container&stderr=true&stdout=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 7 lines ...
  Jan 23 11:51:34.618: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8604 exec execpod-affinity4pf47 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 192.168.6.75 31583:

... skipping 3 lines ...
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 7 lines ...
  Jan 23 11:51:37.530: INFO: Failed to get response from 192.168.6.6:31583. Retry until timeout

... skipping 2 lines ...
  Jan 23 11:52:07.913: INFO: Failed to get response from 192.168.6.6:31583. Retry until timeout

... skipping 2 lines ...
  Jan 23 11:52:38.351: INFO: Failed to get response from 192.168.6.6:31583. Retry until timeout

... skipping 2 lines ...
  Jan 23 11:53:08.419: INFO: Failed to get response from 192.168.6.6:31583. Retry until timeout

... skipping 2 lines ...
  Jan 23 11:54:08.624: INFO: Failed to get response from 192.168.6.6:31583. Retry until timeout

... skipping 23 lines ...
  E0123 11:54:55.074938      17 reflector.go:138] k8s.io/kubernetes/test/utils/pod_store.go:57: Failed to watch *v1.Pod: Get "https://192.168.6.175:6443/api/v1/namespaces/services-8604/pods?allowWatchBookmarks=true&labelSelector=name%3Daffinity-nodeport&resourceVersion=12365&timeout=6m43s&timeoutSeconds=403&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 14 lines ...
  {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":271,"failed":9,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":4,"skipped":176,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]"]}

... skipping 29 lines ...
  Jan 23 11:47:34.940: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9666 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true:

... skipping 3 lines ...
  error: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/statefulset-9666/pods/ss2-1/exec?command=%2Fbin%2Fsh&command=-x&command=-c&command=mv+-v+%2Fusr%2Flocal%2Fapache2%2Fhtdocs%2Findex.html+%2Ftmp%2F+%7C%7C+true&container=webserver&stderr=true&stdout=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 78 lines ...
  Jan 23 11:51:31.501: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9666 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  error: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/statefulset-9666/pods/ss2-1/exec?command=%2Fbin%2Fsh&command=-x&command=-c&command=mv+-v+%2Ftmp%2Findex.html+%2Fusr%2Flocal%2Fapache2%2Fhtdocs%2F+%7C%7C+true&container=webserver&stderr=true&stdout=true": read tcp 172.18.0.3:38558->192.168.6.175:6443: read: connection reset by peer

  
  error:

... skipping 59 lines ...
  {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":5,"skipped":176,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]"]}

... skipping 18 lines ...
  Jan 23 11:55:27.566: FAIL: expected to be able to write to subpath

... skipping 50 lines ...
  {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":6,"skipped":225,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":725,"failed":26,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:54:58.111: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 6 lines ...
  Jan 23 11:55:03.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d7da65a-1286-4406-b33c-e3d566e1fccb" in namespace "downward-api-3236" to be "Succeeded or Failed"

... skipping 17 lines ...
  Jan 23 11:55:39.043: INFO: Pod "downwardapi-volume-7d7da65a-1286-4406-b33c-e3d566e1fccb" satisfied condition "Succeeded or Failed"

... skipping 8 lines ...
  Jan 23 11:55:40.309: FAIL: Couldn't delete ns: "downward-api-3236": Delete "https://192.168.6.175:6443/api/v1/namespaces/downward-api-3236": read tcp 172.18.0.3:41790->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/downward-api-3236", Err:(*net.OpError)(0xc00235a410)})

... skipping 20 lines ...
    Jan 23 11:55:40.309: Couldn't delete ns: "downward-api-3236": Delete "https://192.168.6.175:6443/api/v1/namespaces/downward-api-3236": read tcp 172.18.0.3:41790->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/downward-api-3236", Err:(*net.OpError)(0xc00235a410)})

... skipping 3 lines ...
  {"msg":"FAILED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":17,"skipped":535,"failed":25,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:55:28.141: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:50004->192.168.6.175:6443: read: connection reset by peer

... skipping 10 lines ...
  W0123 11:55:40.131408      35 http.go:498] Error reading backend response: read tcp 172.18.0.3:41792->192.168.6.175:6443: read: connection reset by peer

  Jan 23 11:55:40.131: FAIL: expected to be able to write to subpath

... skipping 26 lines ...
  {"msg":"FAILED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":17,"skipped":535,"failed":26,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]"]}

... skipping 9 lines ...
  Jan 23 11:55:43.273: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc003a16db0>: {

... skipping 33 lines ...
  Jan 23 11:55:43.592: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
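
  The "All nodes should be ready after test" check that keeps failing here is a
  plain GET on /api/v1/nodes followed by a scan of each node's Ready condition.
  A minimal sketch of that check, assuming the run's kubeconfig:

      package main

      import (
          "context"
          "fmt"
          "log"

          corev1 "k8s.io/api/core/v1"
          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/client-go/kubernetes"
          "k8s.io/client-go/tools/clientcmd"
      )

      func main() {
          cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
          if err != nil {
              log.Fatal(err)
          }
          cs, err := kubernetes.NewForConfig(cfg)
          if err != nil {
              log.Fatal(err)
          }
          nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
          if err != nil {
              // This is the request that fails with the x509 "certificate signed
              // by unknown authority" error in this run.
              log.Fatalf("listing nodes: %v", err)
          }
          for _, n := range nodes.Items {
              for _, c := range n.Status.Conditions {
                  if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
                      fmt.Printf("node %s not ready\n", n.Name)
                  }
              }
          }
      }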

... skipping 19 lines ...
    Jan 23 11:55:43.273: Error creating Pod

    Unexpected error:

        <*url.Error | 0xc003a16db0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":17,"skipped":535,"failed":27,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]"]}

... skipping 15 lines ...
  Jan 23 11:55:46.270: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc0022fadb0>: {

... skipping 41 lines ...
  Jan 23 11:55:46.731: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 21 lines ...
      Jan 23 11:55:46.270: Error creating Pod

      Unexpected error:

          <*url.Error | 0xc0022fadb0>: {

... skipping 24 lines ...
  Jan 23 11:55:15.930: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:51836->192.168.6.175:6443: read: connection reset by peer

... skipping 5 lines ...
  Jan 23 11:55:20.984: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1300e4ba-840a-4f29-9aff-161b47408432" in namespace "projected-8014" to be "Succeeded or Failed"

... skipping 12 lines ...
  Jan 23 11:55:46.015: INFO: Pod "pod-projected-configmaps-1300e4ba-840a-4f29-9aff-161b47408432" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":277,"failed":9,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":572,"failed":28,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:55:49.246: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc002cfb3e0>: {

... skipping 41 lines ...
  Jan 23 11:55:49.602: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 21 lines ...
      Jan 23 11:55:49.246: Error creating Pod

      Unexpected error:

          <*url.Error | 0xc002cfb3e0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":572,"failed":29,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:55:52.362: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc0038fa780>: {

... skipping 41 lines ...
  Jan 23 11:55:52.608: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:55:52.965: FAIL: Couldn't delete ns: "pod-network-test-178": Delete "https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-178": read tcp 172.18.0.3:48722->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/pod-network-test-178", Err:(*net.OpError)(0xc0016de000)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00229de60, 0x112}, {0xc0025b8c08, 0x6ec4cca, 0xc0025b8c30})

... skipping 23 lines ...
      Jan 23 11:55:52.362: Error creating Pod

      Unexpected error:

          <*url.Error | 0xc0038fa780>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":572,"failed":30,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] 
Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

... skipping 14 lines ...
  Jan 23 11:55:36.306: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a91269c-9358-4f11-865b-7b6df3dbbb3d" in namespace "downward-api-4099" to be "Succeeded or Failed"

... skipping 16 lines ...
  Jan 23 11:56:11.275: INFO: Pod "downwardapi-volume-5a91269c-9358-4f11-865b-7b6df3dbbb3d" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":228,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]"]}

... skipping 14 lines ...
  E0123 11:56:01.638106      35 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:48728->192.168.6.175:6443: read: connection reset by peer

... skipping 50 lines ...
  {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":18,"skipped":596,"failed":30,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] 
[Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

... skipping 22 lines ...
  {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":8,"skipped":240,"failed":12,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]"]}

... skipping 22 lines ...
  {"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":19,"skipped":609,"failed":30,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for 
intra-pod communication: http [NodeConformance] [Conformance]"]}

... skipping 12 lines ...
  Jan 23 11:56:16.673: FAIL: while creating secret

  Unexpected error:

      <*url.Error | 0xc0045424b0>: {

... skipping 31 lines ...
  Jan 23 11:56:17.116: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 22 lines ...
      Unexpected error:

          <*url.Error | 0xc0045424b0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":725,"failed":27,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:55:40.572: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 6 lines ...
  Jan 23 11:55:45.380: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ee1fa9c-7794-4e58-9c39-9cfb49fa9a04" in namespace "downward-api-9876" to be "Succeeded or Failed"

... skipping 20 lines ...
  Jan 23 11:56:28.451: INFO: Pod "downwardapi-volume-7ee1fa9c-7794-4e58-9c39-9cfb49fa9a04" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":725,"failed":27,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]"]}

... skipping 14 lines ...
  Jan 23 11:56:18.873: INFO: Waiting up to 5m0s for pod "projected-volume-81d5f042-c5cd-4026-846f-19a52595aae3" in namespace "projected-7440" to be "Succeeded or Failed"

... skipping 8 lines ...
  Jan 23 11:56:33.272: INFO: Pod "projected-volume-81d5f042-c5cd-4026-846f-19a52595aae3" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"FAILED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":8,"skipped":256,"failed":13,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]"]}

... skipping 13 lines ...
  Jan 23 11:56:19.000: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-62h9" in namespace "subpath-3396" to be "Succeeded or Failed"

... skipping 14 lines ...
  Jan 23 11:56:48.467: INFO: Pod "pod-subpath-test-downwardapi-62h9" satisfied condition "Succeeded or Failed"

... skipping 10 lines ...
  Jan 23 11:56:50.032: FAIL: Couldn't delete ns: "subpath-3396": Delete "https://192.168.6.175:6443/api/v1/namespaces/subpath-3396": read tcp 172.18.0.3:34934->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/subpath-3396", Err:(*net.OpError)(0xc003451ae0)})

... skipping 22 lines ...
      Jan 23 11:56:50.032: Couldn't delete ns: "subpath-3396": Delete "https://192.168.6.175:6443/api/v1/namespaces/subpath-3396": read tcp 172.18.0.3:34934->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/subpath-3396", Err:(*net.OpError)(0xc003451ae0)})

... skipping 3 lines ...
  {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":620,"failed":30,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] 
[Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:56:34.766: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:58690->192.168.6.175:6443: read: connection reset by peer

... skipping 6 lines ...
  Jan 23 11:56:39.475: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6779a44-9e1a-49c7-807c-59698908bd86" in namespace "downward-api-6253" to be "Succeeded or Failed"

... skipping 14 lines ...
  Jan 23 11:57:10.550: INFO: Pod "downwardapi-volume-a6779a44-9e1a-49c7-807c-59698908bd86" satisfied condition "Succeeded or Failed"

... skipping 2 lines ...
  Jan 23 11:57:11.204: FAIL: Failed to delete pod "downwardapi-volume-a6779a44-9e1a-49c7-807c-59698908bd86": Delete "https://192.168.6.175:6443/api/v1/namespaces/downward-api-6253/pods/downwardapi-volume-a6779a44-9e1a-49c7-807c-59698908bd86": read tcp 172.18.0.3:41104->192.168.6.175:6443: read: connection reset by peer

... skipping 23 lines ...
  Jan 23 11:57:11.521: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:57:11.838: FAIL: Couldn't delete ns: "downward-api-6253": Delete "https://192.168.6.175:6443/api/v1/namespaces/downward-api-6253": read tcp 172.18.0.3:41120->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/downward-api-6253", Err:(*net.OpError)(0xc003269900)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001312b40, 0x112}, {0xc0025b8c08, 0x6ec4cca, 0xc0025b8c30})

... skipping 21 lines ...
    Jan 23 11:57:11.205: Failed to delete pod "downwardapi-volume-a6779a44-9e1a-49c7-807c-59698908bd86": Delete "https://192.168.6.175:6443/api/v1/namespaces/downward-api-6253/pods/downwardapi-volume-a6779a44-9e1a-49c7-807c-59698908bd86": read tcp 172.18.0.3:41104->192.168.6.175:6443: read: connection reset by peer

... skipping 3 lines ...
  {"msg":"FAILED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":8,"skipped":256,"failed":14,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]"]}

... skipping 5 lines ...
  Jan 23 11:56:50.354: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 8 lines ...
  Jan 23 11:56:55.795: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-55pz" in namespace "subpath-4696" to be "Succeeded or Failed"

... skipping 14 lines ...
  Jan 23 11:57:29.118: INFO: Pod "pod-subpath-test-downwardapi-55pz" satisfied condition "Succeeded or Failed"

... skipping 13 lines ...
  {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":9,"skipped":256,"failed":14,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]"]}

... skipping 13 lines ...
  Jan 23 11:57:59.623: FAIL: Unexpected error:

      <*url.Error | 0xc00360d440>: {

... skipping 31 lines ...
  Jan 23 11:57:59.980: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 11:58:00.273: FAIL: Couldn't delete ns: "resourcequota-4938": Delete "https://192.168.6.175:6443/api/v1/namespaces/resourcequota-4938": read tcp 172.18.0.3:42020->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/resourcequota-4938", Err:(*net.OpError)(0xc0010cb450)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000df9d40, 0x112}, {0xc00433ac08, 0x6ec4cca, 0xc00433ac30})

... skipping 21 lines ...
    Jan 23 11:57:59.623: Unexpected error:

        <*url.Error | 0xc00360d440>: {

... skipping 40 lines ...
  {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":16,"skipped":728,"failed":27,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]"]}

... skipping 23 lines ...
  {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":17,"skipped":732,"failed":27,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":620,"failed":31,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular 
Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]"]}

... skipping 11 lines ...
  Jan 23 11:57:13.421: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31df4a26-3122-4271-93a5-58762ac456bb" in namespace "downward-api-2284" to be "Succeeded or Failed"

... skipping 24 lines ...
  Jan 23 11:58:05.525: INFO: Pod "downwardapi-volume-31df4a26-3122-4271-93a5-58762ac456bb" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":620,"failed":31,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular 
Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":9,"skipped":259,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]"]}

... skipping 22 lines ...
  {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":10,"skipped":259,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]"]}

... skipping 10 lines ...
  Jan 23 11:58:32.608: INFO: Waiting up to 5m0s for pod "pod-configmaps-98854c4a-591f-4846-8e91-9b8390a983b9" in namespace "configmap-8832" to be "Succeeded or Failed"

... skipping 4 lines ...
  Jan 23 11:58:37.458: INFO: Pod "pod-configmaps-98854c4a-591f-4846-8e91-9b8390a983b9" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":259,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]"]}

... skipping 34 lines ...
  Jan 23 11:57:02.308: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1053 exec execpod-affinityz9phl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

... skipping 3 lines ...
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 7 lines ...
  Jan 23 11:57:05.567: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1053 exec execpod-affinityz9phl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.99.233.158 80:

... skipping 3 lines ...
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 4 lines ...
  Jan 23 11:57:38.088: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1053 exec execpod-affinityz9phl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.99.233.158 80:

... skipping 3 lines ...
  error: Timeout occurred

  
  error:

... skipping 4 lines ...
  Jan 23 11:57:38.891: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1053 exec execpod-affinityz9phl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.99.233.158 80:

... skipping 3 lines ...
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 7 lines ...
  Jan 23 11:57:41.822: INFO: Failed to get response from 10.99.233.158:80. Retry until timeout

... skipping 2 lines ...
  Jan 23 11:58:12.161: INFO: Failed to get response from 10.99.233.158:80. Retry until timeout

... skipping 20 lines ...
  Jan 23 11:58:45.306: FAIL: failed to delete pod: execpod-affinityz9phl in namespace: services-1053

  Unexpected error:

      <*url.Error | 0xc00431a1e0>: {

... skipping 49 lines ...
    Jan 23 11:58:45.306: failed to delete pod: execpod-affinityz9phl in namespace: services-1053

    Unexpected error:

        <*url.Error | 0xc00431a1e0>: {

... skipping 30 lines ...
  Jan 23 11:58:41.137: INFO: Waiting up to 5m0s for pod "downwardapi-volume-233c3431-0109-47df-b6d5-3498d3de46b1" in namespace "projected-9996" to be "Succeeded or Failed"

... skipping 11 lines ...
  Jan 23 11:59:04.965: INFO: Pod "downwardapi-volume-233c3431-0109-47df-b6d5-3498d3de46b1" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":292,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]"]}

... skipping 56 lines ...
  {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":22,"skipped":628,"failed":31,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular 
Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]"]}

... skipping 8 lines ...
  Jan 23 11:59:09.536: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:51930->192.168.6.175:6443: read: connection reset by peer

... skipping 18 lines ...
  {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":23,"skipped":635,"failed":31,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular 
Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]"]}

... skipping 26 lines ...
  {"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":13,"skipped":300,"failed":15,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]"]}

... skipping 29 lines ...
  Jan 23 11:52:32.319: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true:

... skipping 3 lines ...
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 17 lines ...
  Jan 23 11:53:32.168: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  error: Timeout occurred

  
  error:

... skipping 8 lines ...
  Jan 23 11:53:45.286: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 17 lines ...
  Jan 23 11:54:00.126: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true:

... skipping 3 lines ...
  error: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/statefulset-2871/pods/ss-0/exec?command=%2Fbin%2Fsh&command=-x&command=-c&command=mv+-v+%2Fusr%2Flocal%2Fapache2%2Fhtdocs%2Findex.html+%2Ftmp%2F+%7C%7C+true&container=webserver&stderr=true&stdout=true": read tcp 172.18.0.3:34502->192.168.6.175:6443: read: connection reset by peer

  
  error:

... skipping 8 lines ...
  Jan 23 11:54:12.331: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true:

... skipping 3 lines ...
  error: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/statefulset-2871/pods/ss-1/exec?command=%2Fbin%2Fsh&command=-x&command=-c&command=mv+-v+%2Fusr%2Flocal%2Fapache2%2Fhtdocs%2Findex.html+%2Ftmp%2F+%7C%7C+true&container=webserver&stderr=true&stdout=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 72 lines ...
  Jan 23 11:54:54.950: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  
  error:

... skipping 3 lines ...
  Jan 23 11:55:05.631: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:55:17.633: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:55:29.694: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:55:41.777: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:55:53.871: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:56:05.957: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:56:18.118: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:56:30.328: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:56:42.650: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:56:54.584: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:57:06.743: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:57:18.943: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:57:30.950: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:57:43.180: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:57:55.255: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:58:07.273: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:58:19.487: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:58:31.546: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:58:43.649: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:58:55.819: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:59:07.973: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:59:20.132: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:59:32.299: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 3 lines ...
  Jan 23 11:59:44.504: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2871 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

... skipping 3 lines ...
  Error from server (NotFound): pods "ss-1" not found

  
  error:

... skipping 26 lines ...
  {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":3,"skipped":67,"failed":8,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]"]}

... skipping 11 lines ...
  Jan 23 12:00:01.310: FAIL: couldn't delete collection

  Unexpected error:

      <*url.Error | 0xc002bba120>: {

... skipping 31 lines ...
  Jan 23 12:00:01.666: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 20 lines ...
    Unexpected error:

        <*url.Error | 0xc002bba120>: {

... skipping 45 lines ...
  {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":24,"skipped":654,"failed":31,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod 
communication: http [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":13,"skipped":338,"failed":16,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]"]}

... skipping 12 lines ...
  Jan 23 12:00:07.255: FAIL: Couldn't delete ns: "lease-test-4118": Delete "https://192.168.6.175:6443/api/v1/namespaces/lease-test-4118": read tcp 172.18.0.3:34896->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/lease-test-4118", Err:(*net.OpError)(0xc0040c95e0)})

... skipping 20 lines ...
    Jan 23 12:00:07.255: Couldn't delete ns: "lease-test-4118": Delete "https://192.168.6.175:6443/api/v1/namespaces/lease-test-4118": read tcp 172.18.0.3:34896->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/lease-test-4118", Err:(*net.OpError)(0xc0040c95e0)})

... skipping 3 lines ...
  {"msg":"FAILED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":13,"skipped":338,"failed":17,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]"]}

... skipping 5 lines ...
  Jan 23 12:00:07.636: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 3 lines ...
  Jan 23 12:00:13.210: FAIL: patching Lease failed

  Unexpected error:

      <*url.Error | 0xc00292cb40>: {

... skipping 31 lines ...
  Jan 23 12:00:13.608: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:00:14.004: FAIL: Couldn't delete ns: "lease-test-4311": Delete "https://192.168.6.175:6443/api/v1/namespaces/lease-test-4311": read tcp 172.18.0.3:49678->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/lease-test-4311", Err:(*net.OpError)(0xc001dfd0e0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001bb4480, 0x112}, {0xc00433ac08, 0x6ec4cca, 0xc00433ac30})

... skipping 21 lines ...
    Jan 23 12:00:13.211: patching Lease failed

    Unexpected error:

        <*url.Error | 0xc00292cb40>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":13,"skipped":338,"failed":18,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]"]}

... skipping 18 lines ...
  {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":69,"failed":8,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]"]}

... skipping 13 lines ...
  Jan 23 12:00:43.604: FAIL: failed to create pod template

  Unexpected error:

      <*url.Error | 0xc001f7d440>: {

... skipping 31 lines ...
  Jan 23 12:00:43.963: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:00:44.301: FAIL: Couldn't delete ns: "podtemplate-7475": Delete "https://192.168.6.175:6443/api/v1/namespaces/podtemplate-7475": read tcp 172.18.0.3:34238->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/podtemplate-7475", Err:(*net.OpError)(0xc0005f4cd0)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0011eab40, 0x112}, {0xc002124c08, 0x6ec4cca, 0xc002124c30})

... skipping 21 lines ...
    Jan 23 12:00:43.604: failed to create pod template

    Unexpected error:

        <*url.Error | 0xc001f7d440>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":4,"skipped":73,"failed":9,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-node] PodTemplates should delete a collection of pod templates [Conformance]"]}

... skipping 24 lines ...
  {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":5,"skipped":73,"failed":9,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-node] PodTemplates should delete a collection of pod templates [Conformance]"]}

... skipping 22 lines ...
  Jan 23 11:59:50.903: INFO: Observed stateful pod in namespace: statefulset-3187, name: ss-0, uid: 339d4fd9-8781-4276-86fe-798fa425a76a, status phase: Failed. Waiting for statefulset controller to delete.

  Jan 23 11:59:50.909: INFO: Observed stateful pod in namespace: statefulset-3187, name: ss-0, uid: 339d4fd9-8781-4276-86fe-798fa425a76a, status phase: Failed. Waiting for statefulset controller to delete.

... skipping 23 lines ...
  {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":18,"skipped":735,"failed":27,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]"]}

... skipping 25 lines ...
  {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":6,"skipped":74,"failed":9,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-node] PodTemplates should delete a collection of pod templates [Conformance]"]}

... skipping 12 lines ...
  Jan 23 12:00:55.954: FAIL: creating CustomResourceDefinition

  Unexpected error:

      <*url.Error | 0xc002b8b7d0>: {

... skipping 31 lines ...
  Jan 23 12:00:56.286: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 22 lines ...
      Unexpected error:

          <*url.Error | 0xc002b8b7d0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":17,"skipped":313,"failed":10,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

... skipping 17 lines ...
  E0123 11:59:00.847711      17 reflector.go:138] k8s.io/kubernetes/test/utils/pod_store.go:57: Failed to watch *v1.Pod: Get "https://192.168.6.175:6443/api/v1/namespaces/services-6777/pods?allowWatchBookmarks=true&labelSelector=name%3Daffinity-clusterip&resourceVersion=15250&timeout=5m11s&timeoutSeconds=311&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 2 lines ...
  E0123 11:59:07.002487      17 reflector.go:138] k8s.io/kubernetes/test/utils/pod_store.go:57: Failed to watch *v1.Pod: Get "https://192.168.6.175:6443/api/v1/namespaces/services-6777/pods?allowWatchBookmarks=true&labelSelector=name%3Daffinity-clusterip&resourceVersion=15314&timeout=9m8s&timeoutSeconds=548&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 13 lines ...
  Jan 23 12:00:12.753: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6777 exec execpod-affinityzcbrg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

... skipping 3 lines ...
  error: Timeout occurred

  
  error:

... skipping 7 lines ...
  Jan 23 12:00:16.281: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6777 exec execpod-affinityzcbrg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.97.103.241 80:

... skipping 3 lines ...
  error: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/services-6777/pods/execpod-affinityzcbrg/exec?command=%2Fbin%2Fsh&command=-x&command=-c&command=echo+hostName+%7C+nc+-v+-t+-w+2+10.97.103.241+80&container=agnhost-container&stderr=true&stdout=true": read tcp 172.18.0.3:49726->192.168.6.175:6443: read: connection reset by peer

  
  error:

... skipping 42 lines ...
  {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":313,"failed":10,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

... skipping 12 lines ...
  Jan 23 12:00:58.750: FAIL: failed to create headless service: dns-test-service

  Unexpected error:

      <*url.Error | 0xc00314dd40>: {

... skipping 31 lines ...
  Jan 23 12:00:59.161: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:00:59.484: FAIL: Couldn't delete ns: "dns-2529": Delete "https://192.168.6.175:6443/api/v1/namespaces/dns-2529": read tcp 172.18.0.3:59682->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/dns-2529", Err:(*net.OpError)(0xc002465a90)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001ae2360, 0x112}, {0xc0014b0c08, 0x6ec4cca, 0xc0014b0c30})

... skipping 21 lines ...
    Jan 23 12:00:58.750: failed to create headless service: dns-test-service

    Unexpected error:

        <*url.Error | 0xc00314dd40>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":6,"skipped":104,"failed":10,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-node] PodTemplates should delete a collection of pod templates [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]"]}

... skipping 16 lines ...
  {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":7,"skipped":104,"failed":10,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-node] PodTemplates should delete a collection of pod templates [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]"]}

... skipping 12 lines ...
  Jan 23 12:00:51.219: INFO: Waiting up to 5m0s for pod "pod-77b86efb-760e-4f01-b4f4-2a57c72435af" in namespace "emptydir-3791" to be "Succeeded or Failed"

... skipping 15 lines ...
  Jan 23 12:01:24.793: INFO: Pod "pod-77b86efb-760e-4f01-b4f4-2a57c72435af" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":739,"failed":27,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]"]}

... skipping 8 lines ...
  Jan 23 12:01:26.397: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 5 lines ...
  Jan 23 12:01:31.274: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-8ad4a20b-447d-4f22-8973-f088a3a53155" in namespace "security-context-test-7882" to be "Succeeded or Failed"

... skipping 2 lines ...
  Jan 23 12:01:33.668: INFO: Pod "busybox-privileged-false-8ad4a20b-447d-4f22-8973-f088a3a53155" satisfied condition "Succeeded or Failed"

... skipping 8 lines ...
  {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":784,"failed":27,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]"]}

... skipping 27 lines ...
  {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":21,"skipped":809,"failed":27,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]"]}

... skipping 13 lines ...
  Jan 23 12:01:43.216: INFO: Waiting up to 5m0s for pod "pod-configmaps-529a8d37-dd0a-4e76-9385-51b45f85afb8" in namespace "configmap-1481" to be "Succeeded or Failed"

... skipping 3 lines ...
  Jan 23 12:01:45.477: INFO: Pod "pod-configmaps-529a8d37-dd0a-4e76-9385-51b45f85afb8" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":825,"failed":27,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]"]}

... skipping 48 lines ...
  {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":124,"failed":10,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-node] PodTemplates should delete a collection of pod templates [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]"]}

... skipping 24 lines ...
  Jan 23 12:00:36.231: INFO: Waiting up to 5m0s for pod "client-envvars-dc76b01d-dcdf-44b4-a888-ae5303f82178" in namespace "pods-8048" to be "Succeeded or Failed"

... skipping 41 lines ...
  Jan 23 12:02:22.643: INFO: Pod "client-envvars-dc76b01d-dcdf-44b4-a888-ae5303f82178" satisfied condition "Succeeded or Failed"

... skipping 7 lines ...
  E0123 12:02:24.003021      21 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:43580->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:02:24.003: FAIL: All nodes should be ready after test, unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:43580->192.168.6.175:6443: read: connection reset by peer

... skipping 11 lines ...
  Jan 23 12:02:24.304: FAIL: Couldn't delete ns: "pods-8048": Delete "https://192.168.6.175:6443/api/v1/namespaces/pods-8048": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/pods-8048", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc0007f5600), hintErr:(*errors.errorString)(0xc00007c4b0), hintCert:(*x509.Certificate)(0xc000181600)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0032d42a0, 0xd3}, {0xc00433ac08, 0x6ec4cca, 0xc00433ac30})

... skipping 21 lines ...
    Jan 23 12:02:24.003: All nodes should be ready after test, unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:43580->192.168.6.175:6443: read: connection reset by peer

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":323,"failed":11,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]"]}

... skipping 29 lines ...
  Jan 23 12:01:58.718: INFO: Lookups using dns-6700/dns-test-d3f0b74b-6b59-4aca-abff-0b66a1b67102 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6700 wheezy_tcp@dns-test-service.dns-6700 wheezy_udp@dns-test-service.dns-6700.svc wheezy_tcp@dns-test-service.dns-6700.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6700 jessie_tcp@dns-test-service.dns-6700 jessie_udp@dns-test-service.dns-6700.svc jessie_tcp@dns-test-service.dns-6700.svc]

... skipping 13 lines ...
  Jan 23 12:02:11.400: INFO: Lookups using dns-6700/dns-test-d3f0b74b-6b59-4aca-abff-0b66a1b67102 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6700 wheezy_tcp@dns-test-service.dns-6700 wheezy_udp@dns-test-service.dns-6700.svc wheezy_tcp@dns-test-service.dns-6700.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6700 jessie_tcp@dns-test-service.dns-6700 jessie_udp@dns-test-service.dns-6700.svc jessie_tcp@dns-test-service.dns-6700.svc]

... skipping 10 lines ...
  Jan 23 12:02:22.963: INFO: Lookups using dns-6700/dns-test-d3f0b74b-6b59-4aca-abff-0b66a1b67102 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6700 wheezy_tcp@dns-test-service.dns-6700 wheezy_udp@dns-test-service.dns-6700.svc wheezy_tcp@dns-test-service.dns-6700.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6700]
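  (Editor's note, not part of the test output.) The failing lookups above are the wheezy/jessie probe pods resolving the headless service name at increasing levels of qualification. A minimal sketch of the same lookups with Go's standard resolver, using the names from the log; the cluster domain cluster.local is an assumption (it is not shown in this log):

  package main

  import (
      "context"
      "fmt"
      "net"
      "time"
  )

  func main() {
      // Names mirror the log: service "dns-test-service" in namespace "dns-6700".
      names := []string{
          "dns-test-service",
          "dns-test-service.dns-6700",
          "dns-test-service.dns-6700.svc",
          "dns-test-service.dns-6700.svc.cluster.local",
      }
      r := &net.Resolver{}
      for _, name := range names {
          ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
          addrs, err := r.LookupHost(ctx, name)
          cancel()
          if err != nil {
              fmt.Printf("%s: %v\n", name, err)
              continue
          }
          fmt.Printf("%s -> %v\n", name, addrs)
      }
  }

  The short names typically resolve only from a pod whose resolv.conf search path includes the namespace, which is why the test runs these probes inside the cluster.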

... skipping 13 lines ...
  {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":19,"skipped":323,"failed":11,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]"]}

... skipping 42 lines ...
  Jan 23 12:02:48.059: FAIL: Couldn't delete ns: "replicaset-7228": Delete "https://192.168.6.175:6443/api/v1/namespaces/replicaset-7228": read tcp 172.18.0.3:33376->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/replicaset-7228", Err:(*net.OpError)(0xc0005f4050)})

... skipping 20 lines ...
    Jan 23 12:02:48.059: Couldn't delete ns: "replicaset-7228": Delete "https://192.168.6.175:6443/api/v1/namespaces/replicaset-7228": read tcp 172.18.0.3:33376->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/replicaset-7228", Err:(*net.OpError)(0xc0005f4050)})

... skipping 47 lines ...
  {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":20,"skipped":354,"failed":11,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]"]}

... skipping 15 lines ...
  Jan 23 12:03:18.352: FAIL: ginkgo.Failed to create pod dns-1479/dns-test-5b8e50f9-ed38-4481-8e2d-58baa217d083: Post "https://192.168.6.175:6443/api/v1/namespaces/dns-1479/pods": read tcp 172.18.0.3:48234->192.168.6.175:6443: read: connection reset by peer

... skipping 27 lines ...
    Jan 23 12:03:18.352: ginkgo.Failed to create pod dns-1479/dns-test-5b8e50f9-ed38-4481-8e2d-58baa217d083: Post "https://192.168.6.175:6443/api/v1/namespaces/dns-1479/pods": read tcp 172.18.0.3:48234->192.168.6.175:6443: read: connection reset by peer

... skipping 3 lines ...
  {"msg":"FAILED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":347,"failed":19,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}

... skipping 22 lines ...
  Jan 23 12:02:50.170: INFO: Waiting up to 5m0s for pod "client-envvars-19679168-3882-4da2-8939-81464eebb6f8" in namespace "pods-1223" to be "Succeeded or Failed"

... skipping 14 lines ...
  Jan 23 12:03:22.090: INFO: Pod "client-envvars-19679168-3882-4da2-8939-81464eebb6f8" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":347,"failed":19,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]"]}

... skipping 10 lines ...
  Jan 23 12:03:24.458: FAIL: Did not get a good sample size: []

  Less than two runs succeeded; aborting.
  Not all RC/pod/service trials succeeded: error creating replication controller: failed to create object with non-retriable error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-3953/replicationcontrollers": read tcp 172.18.0.3:48302->192.168.6.175:6443: read: connection reset by peer

... skipping 13 lines ...
  Jan 23 12:03:24.859: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:03:25.283: FAIL: Couldn't delete ns: "svc-latency-3953": Delete "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-3953": read tcp 172.18.0.3:48386->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/svc-latency-3953", Err:(*net.OpError)(0xc002b4f720)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00154e480, 0x112}, {0xc002fecc08, 0x6ec4cca, 0xc002fecc30})

... skipping 23 lines ...
    Not all RC/pod/service trials succeeded: error creating replication controller: failed to create object with non-retriable error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-3953/replicationcontrollers": read tcp 172.18.0.3:48302->192.168.6.175:6443: read: connection reset by peer

... skipping 3 lines ...
  {"msg":"FAILED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":8,"skipped":128,"failed":11,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-node] PodTemplates should delete a collection of pod templates [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]"]}

... skipping 5 lines ...
  Jan 23 12:02:48.362: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 34 lines ...
  Jan 23 12:03:36.480: FAIL: Couldn't delete ns: "replicaset-4387": Delete "https://192.168.6.175:6443/api/v1/namespaces/replicaset-4387": read tcp 172.18.0.3:42450->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/replicaset-4387", Err:(*net.OpError)(0xc0005f5810)})

... skipping 20 lines ...
    Jan 23 12:03:36.480: Couldn't delete ns: "replicaset-4387": Delete "https://192.168.6.175:6443/api/v1/namespaces/replicaset-4387": read tcp 172.18.0.3:42450->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/replicaset-4387", Err:(*net.OpError)(0xc0005f5810)})

... skipping 19 lines ...
  E0123 12:01:56.823673      15 reflector.go:138] k8s.io/kubernetes/test/utils/pod_store.go:57: Failed to watch *v1.Pod: Get "https://192.168.6.175:6443/api/v1/namespaces/services-7220/pods?allowWatchBookmarks=true&labelSelector=name%3Daffinity-clusterip-transition&resourceVersion=16785&timeout=8m42s&timeoutSeconds=522&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 12 lines ...
  Jan 23 12:03:15.269: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7220 exec execpod-affinityg5zg7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:

... skipping 3 lines ...
  error: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/services-7220/pods/execpod-affinityg5zg7/exec?command=%2Fbin%2Fsh&command=-x&command=-c&command=echo+hostName+%7C+nc+-v+-t+-w+2+affinity-clusterip-transition+80&container=agnhost-container&stderr=true&stdout=true": read tcp 172.18.0.3:37478->192.168.6.175:6443: read: connection reset by peer

  
  error:

... skipping 8 lines ...
  Jan 23 12:03:21.382: FAIL: Unexpected error:

      <*errors.errorString | 0xc004d846f0>: {
          s: "failed to update Service \"affinity-clusterip-transition\": Put \"https://192.168.6.175:6443/api/v1/namespaces/services-7220/services/affinity-clusterip-transition\": read tcp 172.18.0.3:37424->192.168.6.175:6443: read: connection reset by peer",

      }
      failed to update Service "affinity-clusterip-transition": Put "https://192.168.6.175:6443/api/v1/namespaces/services-7220/services/affinity-clusterip-transition": read tcp 172.18.0.3:37424->192.168.6.175:6443: read: connection reset by peer

... skipping 18 lines ...
  Jan 23 12:03:21.771: FAIL: failed to delete pod: execpod-affinityg5zg7 in namespace: services-7220

  Unexpected error:

      <*url.Error | 0xc005037380>: {

... skipping 118 lines ...
                  s: "crypto/rsa: verification error",

... skipping 99 lines ...
      Delete "https://192.168.6.175:6443/api/v1/namespaces/services-7220/pods/execpod-affinityg5zg7": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 7 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00284a280, 0x259}, {0xc0012ccde0, 0x6ec4cca, 0xc0012cce00})

  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
  k8s.io/kubernetes/test/e2e/framework.Fail({0xc00284a000, 0x244}, {0xc0043c0bd8, 0xc00284a000, 0xc002374000})

... skipping 40 lines ...
    Jan 23 12:03:21.382: Unexpected error:

        <*errors.errorString | 0xc004d846f0>: {
            s: "failed to update Service \"affinity-clusterip-transition\": Put \"https://192.168.6.175:6443/api/v1/namespaces/services-7220/services/affinity-clusterip-transition\": read tcp 172.18.0.3:37424->192.168.6.175:6443: read: connection reset by peer",

        }
        failed to update Service "affinity-clusterip-transition": Put "https://192.168.6.175:6443/api/v1/namespaces/services-7220/services/affinity-clusterip-transition": read tcp 172.18.0.3:37424->192.168.6.175:6443: read: connection reset by peer

... skipping 4 lines ...
  {"msg":"FAILED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":8,"skipped":128,"failed":12,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-node] PodTemplates should delete a collection of pod templates [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]"]}

... skipping 5 lines ...
  Jan 23 12:03:36.885: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 19 lines ...
  {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":9,"skipped":128,"failed":12,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-node] PodTemplates should delete a collection of pod templates [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":14,"skipped":347,"failed":20,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]"]}

... skipping 52 lines ...
  Jan 23 12:03:48.763: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.763: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.763: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.763: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.763: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.763: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.763: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.763: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.764: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.764: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.764: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.764: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.764: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.764: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:48.764: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:48404->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:49.023: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.026: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.027: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.028: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  E0123 12:03:49.028375      21 reflector.go:138] k8s.io/kubernetes/test/e2e/network/service_latency.go:327: Failed to watch *v1.Endpoints: Get "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/endpoints?allowWatchBookmarks=true&resourceVersion=17660&timeout=5m18s&timeoutSeconds=318&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.030: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.032: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.033: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.033: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.035: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.037: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.037: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.040: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.084: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.088: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.093: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.171: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.173: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.173: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.187: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.188: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.190: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.191: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.217: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.237: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.237: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.238: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.239: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.239: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.242: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.256: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.326: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.326: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.344: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.373: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.374: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.391: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.391: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.393: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.396: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.396: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.396: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.398: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.403: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.443: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:03:49.443: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
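
  The repeated "x509: certificate signed by unknown authority" messages mean the test client's trusted CA pool no longer contains the CA that signed the apiserver's serving certificate (consistent with a CA or serving-cert rotation mid-run). Below is a minimal Go sketch, assuming a hypothetical CA bundle path, of how a TLS client produces this exact error class when its root pool does not match the serving CA; it is not the e2e framework's own client code.

```go
// Minimal sketch (not the e2e framework's client): dial the apiserver with an
// explicit CA pool. If the bundle does not contain the CA that actually signed
// the serving certificate, the handshake fails with
// "x509: certificate signed by unknown authority".
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	// Hypothetical CA bundle path, for illustration only.
	caPEM, err := os.ReadFile("/tmp/kubernetes-ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("no CA certificates found in bundle")
	}
	conn, err := tls.Dial("tcp", "192.168.6.175:6443", &tls.Config{RootCAs: pool})
	if err != nil {
		// e.g. x509: certificate signed by unknown authority
		fmt.Println("handshake failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("server certificate verified against the provided CA pool")
}
```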

  Jan 23 12:03:49.527: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:39334->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:49.527: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:39332->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:49.527: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:39336->192.168.6.175:6443: read: connection reset by peer

... skipping 158 lines ...
  Jan 23 12:03:52.418: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:39344->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:03:52.419: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-8380/services": read tcp 172.18.0.3:39344->192.168.6.175:6443: read: connection reset by peer

... skipping 25 lines ...

... skipping 74 lines ...
  Jan 23 12:03:54.002: FAIL: Not all RC/pod/service trials succeeded: error ratio 0.39 is higher than the acceptable ratio 0.05
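
  The threshold quoted in this failure is a plain ratio: failed trials divided by total trials, compared against an acceptable ceiling of 0.05. A minimal sketch of that arithmetic, with illustrative counts (the log reports only the resulting ratio, 0.39):

```go
// Sketch of the acceptance check implied by the failure above:
// the run fails when failed/total exceeds the acceptable error ratio.
package main

import "fmt"

func errorRatioOK(failed, total int, acceptable float64) (float64, bool) {
	ratio := float64(failed) / float64(total)
	return ratio, ratio <= acceptable
}

func main() {
	// Illustrative counts only; the log prints the resulting ratio, not the raw numbers.
	ratio, ok := errorRatioOK(39, 100, 0.05)
	fmt.Printf("error ratio %.2f acceptable=%v\n", ratio, ok) // error ratio 0.39 acceptable=false
}
```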

... skipping 23 lines ...
    Jan 23 12:03:54.002: Not all RC/pod/service trials succeeded: error ratio 0.39 is higher than the acceptable ratio 0.05

... skipping 15 lines ...
  Jan 23 12:03:53.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9fbf51c8-a595-48a0-90e9-57522e4939da" in namespace "downward-api-1450" to be "Succeeded or Failed"

... skipping 13 lines ...
  Jan 23 12:04:26.030: INFO: Pod "downwardapi-volume-9fbf51c8-a595-48a0-90e9-57522e4939da" satisfied condition "Succeeded or Failed"
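
  The Downward API test above polls the pod until its phase reaches Succeeded or Failed, bounded by the 5m0s timeout. A minimal client-go sketch of that style of wait, assuming an already-constructed clientset and placeholder names; it is not the framework's own helper.

```go
// Sketch: poll a pod until it reaches a terminal phase or the timeout expires.
// Assumes a pre-built clientset; namespace and pod name are placeholders.
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitSucceededOrFailed(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // stop polling on API errors
		}
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
}
```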

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":148,"failed":12,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-node] PodTemplates should delete a collection of pod templates [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]"]}

... skipping 8 lines ...
  Jan 23 12:00:04.108: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:34814->192.168.6.175:6443: read: connection reset by peer

... skipping 13 lines ...
  Jan 23 12:04:41.049: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
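
  After each spec the framework lists the nodes and requires every one of them to report Ready; here the List call itself fails with the certificate error. A rough client-go sketch of the check being attempted, assuming an existing clientset; the actual framework code differs.

```go
// Sketch of the post-test readiness check, conceptually: list nodes and
// verify each reports a Ready condition of True. Not the framework's code.
package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func allNodesReady(cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err // the call failing above with the x509 error
	}
	for _, n := range nodes.Items {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return fmt.Errorf("node %s is not ready", n.Name)
		}
	}
	return nil
}
```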

... skipping 19 lines ...
    Jan 23 12:04:41.049: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":14,"skipped":347,"failed":21,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]"]}

... skipping 48 lines ...
  E0123 12:04:34.803465      21 reflector.go:138] k8s.io/kubernetes/test/utils/pod_store.go:57: Failed to watch *v1.Pod: Get "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/pods?allowWatchBookmarks=true&labelSelector=name%3Dsvc-latency-rc&resourceVersion=19086&timeout=9m40s&timeoutSeconds=580&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 80 lines ...
  Jan 23 12:04:40.463: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:59702->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:40.463: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:59702->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:40.463: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:59702->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:40.463: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:59702->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:40.463: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:59702->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:40.463: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:59702->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:40.463: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:59702->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:40.463: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:59702->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:40.463: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:59702->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:40.463: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:59702->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:40.725: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.726: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.728: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.731: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.731: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.731: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.733: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  E0123 12:04:40.735986      21 reflector.go:138] k8s.io/kubernetes/test/e2e/network/service_latency.go:327: Failed to watch *v1.Endpoints: Get "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/endpoints?allowWatchBookmarks=true&resourceVersion=19270&timeout=5m58s&timeoutSeconds=358&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.736: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.736: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.799: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.906: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.906: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.909: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.909: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.912: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.913: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.920: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.921: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:40.973: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:41.044: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:41.048: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:41.051: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:41.052: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:41.053: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:41.053: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:41.109: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45446->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:41.116: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45450->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:41.116: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45452->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:41.116: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45456->192.168.6.175:6443: read: connection reset by peer

... skipping 113 lines ...
  Jan 23 12:04:43.498: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45460->192.168.6.175:6443: read: connection reset by peer

... skipping 19 lines ...

  Jan 23 12:04:43.499: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45460->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:43.499: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45460->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:43.499: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45460->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:43.499: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45460->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:43.644: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.644: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.646: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.649: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  E0123 12:04:43.650030      21 reflector.go:138] k8s.io/kubernetes/test/e2e/network/service_latency.go:327: Failed to watch *v1.Endpoints: Get "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/endpoints?allowWatchBookmarks=true&resourceVersion=19552&timeout=6m12s&timeoutSeconds=372&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.650: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.651: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.654: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.655: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.655: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.657: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.692: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.698: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.698: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.699: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.700: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.789: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.792: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.792: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.794: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.800: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.824: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.852: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.853: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.854: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.855: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.857: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.857: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.860: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.866: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.950: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.955: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.990: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.997: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:43.998: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:44.000: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:44.001: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:44.003: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:44.005: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:44.005: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:44.007: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:44.014: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:44.023: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:44.067: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:04:44.139: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45596->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:44.139: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45598->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:44.143: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45612->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:44.143: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45614->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:44.143: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45606->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:44.143: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45604->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:44.143: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45608->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:44.143: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45610->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:04:44.272: INFO: Got error: Post "https://192.168.6.175:6443/api/v1/namespaces/svc-latency-4943/services": read tcp 172.18.0.3:45616->192.168.6.175:6443: read: connection reset by peer

... skipping 8 lines ...
  Jan 23 12:04:47.869: FAIL: Did not get a good sample size: [96.788261ms 107.41314ms 128.225487ms 139.519316ms 148.995882ms 152.215639ms 162.786031ms 173.796079ms 177.797928ms 192.22322ms 192.813723ms 192.96902ms 195.653208ms 195.857599ms 211.337503ms 211.992153ms 213.175776ms 231.985276ms 233.289138ms 233.325251ms 233.977762ms 239.427341ms 242.088692ms 251.287364ms 251.308857ms 261.87774ms 262.651657ms 262.974556ms 267.628833ms 268.860787ms 275.072191ms 277.594331ms 278.489911ms 279.227606ms 279.576332ms 281.480193ms 281.506657ms 281.98895ms 286.477749ms 288.209651ms 288.830605ms 292.943722ms 296.355324ms 296.816868ms 297.867296ms 298.381267ms 298.85344ms 300.264519ms 300.567437ms 300.849039ms 301.35802ms 302.404489ms 302.9852ms 305.066765ms 306.716756ms 309.143585ms 310.443221ms 317.407881ms 319.058551ms 320.416229ms 325.524706ms 325.969775ms 331.127884ms 332.789154ms 336.828169ms 337.69661ms 338.534189ms 339.710327ms 340.717716ms 342.081078ms 350.703116ms 360.664078ms 364.093876ms 370.92807ms 380.574947ms 426.209878ms 446.55919ms 452.341202ms 1.04320269s 1.043234009s 1.043625094s 1.050381915s 1.106225046s 1.106367295s 1.106910753s 1.10842181s 1.110789064s 1.115259647s 2.086815449s 2.134383574s 2.173063874s 2.173133417s 2.17319364s 4.214690519s]

  Not all RC/pod/service trials succeeded: error ratio 0.53 is higher than the acceptable ratio 0.05
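
  The service-latency test records one endpoint-propagation latency per trial (the sample printed above) and judges the run on both sample size and the distribution of those durations. A minimal sketch of summarizing such a sample by sorting it and reading off percentiles; the quantiles shown are illustrative, not the test's exact pass criteria.

```go
// Sketch: summarize a latency sample like the one logged above by sorting
// the durations and reading simple percentiles (values are nanoseconds).
package main

import (
	"fmt"
	"sort"
	"time"
)

func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(p*float64(len(sorted)-1) + 0.5)
	return sorted[idx]
}

func main() {
	sample := []time.Duration{ // a few of the logged values, for illustration
		96788261, 107413140, 128225487, 1043202690, 2086815449, 4214690519,
	}
	sort.Slice(sample, func(i, j int) bool { return sample[i] < sample[j] })
	fmt.Println("p50:", percentile(sample, 0.50))
	fmt.Println("p90:", percentile(sample, 0.90))
	fmt.Println("p99:", percentile(sample, 0.99))
}
```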

... skipping 24 lines ...
    Not all RC/pod/service trials succeeded: error ratio 0.53 is higher than the acceptable ratio 0.05

... skipping 4 lines ...
  {"msg":"FAILED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":14,"skipped":347,"failed":22,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]"]}

... skipping 12 lines ...
  Jan 23 12:04:49.713: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc004a58330>: {

... skipping 39 lines ...
  Jan 23 12:04:50.061: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:04:50.375: FAIL: Couldn't delete ns: "containers-5579": Delete "https://192.168.6.175:6443/api/v1/namespaces/containers-5579": read tcp 172.18.0.3:49682->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/containers-5579", Err:(*net.OpError)(0xc0047dcf00)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc004332360, 0x112}, {0xc002fecc08, 0x6ec4cca, 0xc002fecc30})

... skipping 21 lines ...
    Jan 23 12:04:49.714: Error creating Pod

    Unexpected error:

        <*url.Error | 0xc004a58330>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":20,"skipped":354,"failed":12,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}

... skipping 25 lines ...
  Jan 23 12:04:26.719: INFO: Lookups using dns-9212/dns-test-4f2b6284-5f6e-448d-8dd7-a65ed046121d failed for: [wheezy_udp@dns-test-service.dns-9212.svc.cluster.local wheezy_tcp@dns-test-service.dns-9212.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local jessie_udp@dns-test-service.dns-9212.svc.cluster.local jessie_tcp@dns-test-service.dns-9212.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local]

... skipping 9 lines ...
  Jan 23 12:04:39.071: INFO: Lookups using dns-9212/dns-test-4f2b6284-5f6e-448d-8dd7-a65ed046121d failed for: [wheezy_udp@dns-test-service.dns-9212.svc.cluster.local wheezy_tcp@dns-test-service.dns-9212.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local jessie_udp@dns-test-service.dns-9212.svc.cluster.local jessie_tcp@dns-test-service.dns-9212.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local]

... skipping 9 lines ...
  Jan 23 12:04:45.204: INFO: Lookups using dns-9212/dns-test-4f2b6284-5f6e-448d-8dd7-a65ed046121d failed for: [wheezy_udp@dns-test-service.dns-9212.svc.cluster.local wheezy_tcp@dns-test-service.dns-9212.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local jessie_udp@dns-test-service.dns-9212.svc.cluster.local jessie_tcp@dns-test-service.dns-9212.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local]

... skipping 5 lines ...
  Jan 23 12:04:54.928: INFO: Lookups using dns-9212/dns-test-4f2b6284-5f6e-448d-8dd7-a65ed046121d failed for: [wheezy_udp@dns-test-service.dns-9212.svc.cluster.local wheezy_tcp@dns-test-service.dns-9212.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9212.svc.cluster.local]
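
  The DNS conformance probe resolves the service's cluster-local names over UDP and TCP, including the _http._tcp SRV records; the lines above list which queries still fail. A minimal Go sketch of the same kind of lookups, using names taken from the log; the names only resolve when run inside the cluster against the cluster DNS.

```go
// Sketch of the lookups the DNS probe performs conceptually. Must run
// in-cluster; outside the cluster these names will simply fail to resolve.
package main

import (
	"fmt"
	"net"
)

func main() {
	svc := "dns-test-service.dns-9212.svc.cluster.local" // name from the log

	if addrs, err := net.LookupHost(svc); err != nil {
		fmt.Printf("A/AAAA lookup %s failed: %v\n", svc, err)
	} else {
		fmt.Printf("A/AAAA lookup %s -> %v\n", svc, addrs)
	}

	// The _http._tcp entries in the log are SRV records for the named port.
	if _, srvs, err := net.LookupSRV("http", "tcp", svc); err != nil {
		fmt.Printf("SRV lookup _http._tcp.%s failed: %v\n", svc, err)
	} else {
		for _, s := range srvs {
			fmt.Printf("SRV -> %s:%d\n", s.Target, s.Port)
		}
	}
}
```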

... skipping 9 lines ...
  Jan 23 12:05:02.144: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:05:02.484: FAIL: Couldn't delete ns: "dns-9212": Delete "https://192.168.6.175:6443/api/v1/namespaces/dns-9212": read tcp 172.18.0.3:51810->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/dns-9212", Err:(*net.OpError)(0xc00431db30)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc004c5aa20, 0x112}, {0xc0014b0c08, 0x6ec4cca, 0xc0014b0c30})

... skipping 21 lines ...
    Jan 23 12:05:02.144: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":20,"skipped":354,"failed":13,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}

... skipping 25 lines ...
  Jan 23 12:05:18.336: INFO: Lookups using dns-3153/dns-test-9bce5636-8440-450d-a66b-8b9dcc0528ca failed for: [wheezy_udp@dns-test-service.dns-3153.svc.cluster.local wheezy_tcp@dns-test-service.dns-3153.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local jessie_udp@dns-test-service.dns-3153.svc.cluster.local jessie_tcp@dns-test-service.dns-3153.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local]

... skipping 9 lines ...
  Jan 23 12:05:30.362: INFO: Lookups using dns-3153/dns-test-9bce5636-8440-450d-a66b-8b9dcc0528ca failed for: [wheezy_udp@dns-test-service.dns-3153.svc.cluster.local wheezy_tcp@dns-test-service.dns-3153.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local jessie_udp@dns-test-service.dns-3153.svc.cluster.local jessie_tcp@dns-test-service.dns-3153.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local]

... skipping 9 lines ...
  Jan 23 12:05:37.536: INFO: Lookups using dns-3153/dns-test-9bce5636-8440-450d-a66b-8b9dcc0528ca failed for: [wheezy_udp@dns-test-service.dns-3153.svc.cluster.local wheezy_tcp@dns-test-service.dns-3153.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local jessie_udp@dns-test-service.dns-3153.svc.cluster.local jessie_tcp@dns-test-service.dns-3153.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local]

... skipping 5 lines ...
  Jan 23 12:05:45.793: INFO: Lookups using dns-3153/dns-test-9bce5636-8440-450d-a66b-8b9dcc0528ca failed for: [wheezy_udp@dns-test-service.dns-3153.svc.cluster.local wheezy_tcp@dns-test-service.dns-3153.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3153.svc.cluster.local]

... skipping 9 lines ...
  Jan 23 12:05:53.973: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 19 lines ...
    Jan 23 12:05:53.973: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":20,"skipped":354,"failed":14,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}

... skipping 13 lines ...
  Jan 23 12:05:55.646: INFO: Waiting up to 5m0s for pod "busybox-user-65534-b0e20de4-4ac8-494b-9f87-a649ffb3b3c8" in namespace "security-context-test-7434" to be "Succeeded or Failed"

... skipping 5 lines ...
  Jan 23 12:06:07.086: INFO: Pod "busybox-user-65534-b0e20de4-4ac8-494b-9f87-a649ffb3b3c8" satisfied condition "Succeeded or Failed"

... skipping 7 lines ...
  {"msg":"FAILED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":359,"failed":23,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]"]}

... skipping 9 lines ...
  Jan 23 12:04:51.668: INFO: Waiting up to 5m0s for pod "client-containers-302d640e-fac0-4ca6-b340-e889c3a7977c" in namespace "containers-4049" to be "Succeeded or Failed"

... skipping 32 lines ...
  Jan 23 12:06:21.920: INFO: Pod "client-containers-302d640e-fac0-4ca6-b340-e889c3a7977c" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":359,"failed":23,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]"]}

... skipping 8 lines ...
  Jan 23 12:06:23.766: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 21 lines ...
  {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":16,"skipped":433,"failed":23,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]"]}

... skipping 3 lines ...
  {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":377,"failed":14,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}

... skipping 10 lines ...
  Jan 23 12:06:10.043: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3175de23-a72b-4e76-a5ea-9b84ed08764d" in namespace "projected-751" to be "Succeeded or Failed"

... skipping 18 lines ...
  Jan 23 12:06:58.514: INFO: Pod "pod-projected-secrets-3175de23-a72b-4e76-a5ea-9b84ed08764d" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":377,"failed":14,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":22,"skipped":828,"failed":28,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

... skipping 35 lines ...
  Jan 23 12:04:49.418: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4313 exec execpod-affinityvmcl8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:

... skipping 3 lines ...
  W0123 12:04:49.414900    1302 http.go:498] Error reading backend response: read tcp 172.18.0.3:49648->192.168.6.175:6443: read: connection reset by peer

  error: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/services-4313/pods/execpod-affinityvmcl8/exec?command=%2Fbin%2Fsh&command=-x&command=-c&command=echo+hostName+%7C+nc+-v+-t+-w+2+affinity-clusterip-transition+80&container=agnhost-container&stderr=true&stdout=true": read tcp 172.18.0.3:49648->192.168.6.175:6443: read: connection reset by peer

  
  error:

... skipping 4 lines ...
  Jan 23 12:05:22.192: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4313 exec execpod-affinityvmcl8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:

... skipping 3 lines ...
  error: Timeout occurred

  
  error:

... skipping 10 lines ...
  Jan 23 12:06:01.782: INFO: Failed to get response from 10.101.249.112:80. Retry until timeout

... skipping 19 lines ...
  Jan 23 12:06:36.237: FAIL: Unexpected error:

      <*errors.errorString | 0xc0043b6460>: {
          s: "failed to get Service \"affinity-clusterip-transition\": Get \"https://192.168.6.175:6443/api/v1/namespaces/services-4313/services/affinity-clusterip-transition\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")",

      }
      failed to get Service "affinity-clusterip-transition": Get "https://192.168.6.175:6443/api/v1/namespaces/services-4313/services/affinity-clusterip-transition": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 18 lines ...
  Jan 23 12:06:36.592: FAIL: failed to delete pod: execpod-affinityvmcl8 in namespace: services-4313

  Unexpected error:

      <*url.Error | 0xc00124ad20>: {

... skipping 22 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00029ca80, 0x30d}, {0xc0012ccde0, 0x6ec4cca, 0xc0012cce00})

  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
  k8s.io/kubernetes/test/e2e/framework.Fail({0xc005150600, 0x2f8}, {0xc00312a440, 0xc005150600, 0xc003d26b60})

... skipping 40 lines ...
    Jan 23 12:06:36.237: Unexpected error:

        <*errors.errorString | 0xc0043b6460>: {
            s: "failed to get Service \"affinity-clusterip-transition\": Get \"https://192.168.6.175:6443/api/v1/namespaces/services-4313/services/affinity-clusterip-transition\": x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")",

        }
        failed to get Service "affinity-clusterip-transition": Get "https://192.168.6.175:6443/api/v1/namespaces/services-4313/services/affinity-clusterip-transition": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 27 lines ...
  Jan 23 12:07:29.660: FAIL: failed to run command '/agnhost dns-server-list' on pod, stdout: , stderr: , err: Timeout occurred

  Unexpected error:

... skipping 23 lines ...
  E0123 12:07:30.306241      21 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:46402->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:07:30.306: FAIL: All nodes should be ready after test, unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:46402->192.168.6.175:6443: read: connection reset by peer

... skipping 11 lines ...
  Jan 23 12:07:30.563: FAIL: Couldn't delete ns: "dns-4901": Delete "https://192.168.6.175:6443/api/v1/namespaces/dns-4901": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/dns-4901", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc0003a2680), hintErr:(*errors.errorString)(0xc00007c4b0), hintCert:(*x509.Certificate)(0xc000181600)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0032d41c0, 0xd3}, {0xc002b62c08, 0x6ec4cca, 0xc002b62c30})

... skipping 21 lines ...
    Jan 23 12:07:29.660: failed to run command '/agnhost dns-server-list' on pod, stdout: , stderr: , err: Timeout occurred

    Unexpected error:

... skipping 8 lines ...
  {"msg":"FAILED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":16,"skipped":448,"failed":24,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]"]}

... skipping 5 lines ...
  Jan 23 12:07:30.940: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:07:33.324: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:36038->192.168.6.175:6443: read: connection reset by peer

... skipping 13 lines ...
  Jan 23 12:07:42.297: FAIL: failed to run command '/agnhost dns-suffix' on pod, stdout: , stderr: , err: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/dns-1976/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true": read tcp 172.18.0.3:35776->192.168.6.175:6443: read: connection reset by peer

  Unexpected error:

      <*errors.errorString | 0xc00372ff70>: {
          s: "error sending request: Post \"https://192.168.6.175:6443/api/v1/namespaces/dns-1976/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true\": read tcp 172.18.0.3:35776->192.168.6.175:6443: read: connection reset by peer",

      }
      error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/dns-1976/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true": read tcp 172.18.0.3:35776->192.168.6.175:6443: read: connection reset by peer

... skipping 16 lines ...
  Jan 23 12:07:42.424: FAIL: ginkgo.Failed to delete pod test-dns-nameservers: Delete "https://192.168.6.175:6443/api/v1/namespaces/dns-1976/pods/test-dns-nameservers": read tcp 172.18.0.3:35762->192.168.6.175:6443: read: connection reset by peer

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000d3ad80, 0x46a}, {0xc002b62d38, 0x6ec4cca, 0xc002b62d58})

  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
  k8s.io/kubernetes/test/e2e/framework.Fail({0xc000d3a900, 0x455}, {0xc0044fabd8, 0xc000d42300, 0x0})

... skipping 24 lines ...
  Jan 23 12:07:42.678: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:07:43.140: FAIL: Couldn't delete ns: "dns-1976": Delete "https://192.168.6.175:6443/api/v1/namespaces/dns-1976": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/dns-1976", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc003e0b180), hintErr:(*errors.errorString)(0xc00007c4b0), hintCert:(*x509.Certificate)(0xc000181600)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002bf46c0, 0x112}, {0xc002b62c08, 0x6ec4cca, 0xc002b62c30})

... skipping 21 lines ...
    Jan 23 12:07:42.297: failed to run command '/agnhost dns-suffix' on pod, stdout: , stderr: , err: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/dns-1976/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true": read tcp 172.18.0.3:35776->192.168.6.175:6443: read: connection reset by peer

    Unexpected error:

        <*errors.errorString | 0xc00372ff70>: {
            s: "error sending request: Post \"https://192.168.6.175:6443/api/v1/namespaces/dns-1976/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true\": read tcp 172.18.0.3:35776->192.168.6.175:6443: read: connection reset by peer",

        }
        error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/dns-1976/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true": read tcp 172.18.0.3:35776->192.168.6.175:6443: read: connection reset by peer

... skipping 12 lines ...
  [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]

... skipping 10 lines ...
  {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":23,"skipped":384,"failed":14,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]"]}

... skipping 14 lines ...
  Jan 23 12:07:48.478: FAIL: Unexpected error:

      <*errors.errorString | 0xc00435dc50>: {
          s: "failed to create TCP Service \"nodeport-service\": Post \"https://192.168.6.175:6443/api/v1/namespaces/services-2803/services\": read tcp 172.18.0.3:46466->192.168.6.175:6443: read: connection reset by peer",

      }
      failed to create TCP Service "nodeport-service": Post "https://192.168.6.175:6443/api/v1/namespaces/services-2803/services": read tcp 172.18.0.3:46466->192.168.6.175:6443: read: connection reset by peer

... skipping 16 lines ...
  Jan 23 12:07:48.859: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:07:49.302: FAIL: Couldn't delete ns: "services-2803": Delete "https://192.168.6.175:6443/api/v1/namespaces/services-2803": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/services-2803", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc001f8a100), hintErr:(*errors.errorString)(0xc0001924a0), hintCert:(*x509.Certificate)(0xc000c8fb80)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0012a3200, 0x112}, {0xc0027f8c08, 0x6ec4cca, 0xc0027f8c30})

... skipping 23 lines ...
    Jan 23 12:07:48.478: Unexpected error:

        <*errors.errorString | 0xc00435dc50>: {
            s: "failed to create TCP Service \"nodeport-service\": Post \"https://192.168.6.175:6443/api/v1/namespaces/services-2803/services\": read tcp 172.18.0.3:46466->192.168.6.175:6443: read: connection reset by peer",

        }
        failed to create TCP Service "nodeport-service": Post "https://192.168.6.175:6443/api/v1/namespaces/services-2803/services": read tcp 172.18.0.3:46466->192.168.6.175:6443: read: connection reset by peer

... skipping 119 lines ...
  W0123 12:07:54.434333      18 http.go:498] Error reading backend response: read tcp 172.18.0.3:53424->192.168.6.175:6443: read: connection reset by peer

... skipping 24 lines ...
  Jan 23 12:07:57.719: FAIL: Failed to connect to exposed host ports

... skipping 22 lines ...
    Jan 23 12:07:57.719: Failed to connect to exposed host ports

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":16,"skipped":448,"failed":25,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]"]}

... skipping 39 lines ...
  Jan 23 12:08:30.886: FAIL: failed to run command '/agnhost dns-suffix' on pod, stdout: , stderr: , err: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/dns-6084/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true": read tcp 172.18.0.3:33256->192.168.6.175:6443: read: connection reset by peer

  Unexpected error:

      <*errors.errorString | 0xc00092f900>: {
          s: "error sending request: Post \"https://192.168.6.175:6443/api/v1/namespaces/dns-6084/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true\": read tcp 172.18.0.3:33256->192.168.6.175:6443: read: connection reset by peer",

      }
      error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/dns-6084/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true": read tcp 172.18.0.3:33256->192.168.6.175:6443: read: connection reset by peer

... skipping 16 lines ...
  Jan 23 12:08:31.014: FAIL: ginkgo.Failed to delete pod test-dns-nameservers: Delete "https://192.168.6.175:6443/api/v1/namespaces/dns-6084/pods/test-dns-nameservers": read tcp 172.18.0.3:33228->192.168.6.175:6443: read: connection reset by peer

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001ac2900, 0x46a}, {0xc002b62d38, 0x6ec4cca, 0xc002b62d58})

  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
  k8s.io/kubernetes/test/e2e/framework.Fail({0xc001ac2480, 0x455}, {0xc004c7e668, 0xc00415b800, 0x0})

... skipping 24 lines ...
  Jan 23 12:08:31.372: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:08:31.831: FAIL: Couldn't delete ns: "dns-6084": Delete "https://192.168.6.175:6443/api/v1/namespaces/dns-6084": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/dns-6084", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc001c0e580), hintErr:(*errors.errorString)(0xc00007c4b0), hintCert:(*x509.Certificate)(0xc000181600)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc001ad05a0, 0x112}, {0xc002b62c08, 0x6ec4cca, 0xc002b62c30})

... skipping 21 lines ...
    Jan 23 12:08:30.886: failed to run command '/agnhost dns-suffix' on pod, stdout: , stderr: , err: error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/dns-6084/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true": read tcp 172.18.0.3:33256->192.168.6.175:6443: read: connection reset by peer

    Unexpected error:

        <*errors.errorString | 0xc00092f900>: {
            s: "error sending request: Post \"https://192.168.6.175:6443/api/v1/namespaces/dns-6084/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true\": read tcp 172.18.0.3:33256->192.168.6.175:6443: read: connection reset by peer",

        }
        error sending request: Post "https://192.168.6.175:6443/api/v1/namespaces/dns-6084/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true": read tcp 172.18.0.3:33256->192.168.6.175:6443: read: connection reset by peer

... skipping 4 lines ...
  {"msg":"FAILED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":16,"skipped":448,"failed":26,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":22,"skipped":828,"failed":29,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

... skipping 23 lines ...
  E0123 12:07:33.684395      15 reflector.go:138] k8s.io/kubernetes/test/utils/pod_store.go:57: Failed to watch *v1.Pod: Get "https://192.168.6.175:6443/api/v1/namespaces/services-3414/pods?allowWatchBookmarks=true&labelSelector=name%3Daffinity-clusterip-transition&resourceVersion=21110&timeout=8m10s&timeoutSeconds=490&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 10 lines ...
  Jan 23 12:08:30.435: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3414 exec execpod-affinityv9m6q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.97.231.146 80:

... skipping 3 lines ...
  error: Timeout occurred

  
  error:

... skipping 5 lines ...
  Jan 23 12:08:34.028: FAIL: Unexpected error:

      <*errors.errorString | 0xc003d6a320>: {
          s: "failed to update Service \"affinity-clusterip-transition\": Put \"https://192.168.6.175:6443/api/v1/namespaces/services-3414/services/affinity-clusterip-transition\": read tcp 172.18.0.3:53380->192.168.6.175:6443: read: connection reset by peer",

      }
      failed to update Service "affinity-clusterip-transition": Put "https://192.168.6.175:6443/api/v1/namespaces/services-3414/services/affinity-clusterip-transition": read tcp 172.18.0.3:53380->192.168.6.175:6443: read: connection reset by peer

... skipping 18 lines ...
  Jan 23 12:08:34.377: FAIL: failed to delete pod: execpod-affinityv9m6q in namespace: services-3414

  Unexpected error:

      <*url.Error | 0xc003bd0db0>: {

... skipping 118 lines ...
                  s: "crypto/rsa: verification error",

... skipping 99 lines ...
      Delete "https://192.168.6.175:6443/api/v1/namespaces/services-3414/pods/execpod-affinityv9m6q": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 7 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0008c0a00, 0x259}, {0xc0012ccde0, 0x6ec4cca, 0xc0012cce00})

  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
  k8s.io/kubernetes/test/e2e/framework.Fail({0xc0008c0500, 0x244}, {0xc003d022b8, 0xc0008c0500, 0xc0042d1380})

... skipping 28 lines ...
  Jan 23 12:08:37.038: FAIL: Couldn't delete ns: "services-3414": Delete "https://192.168.6.175:6443/api/v1/namespaces/services-3414": read tcp 172.18.0.3:33348->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/services-3414", Err:(*net.OpError)(0xc0034c4140)})

... skipping 22 lines ...
    Jan 23 12:08:34.028: Unexpected error:

        <*errors.errorString | 0xc003d6a320>: {
            s: "failed to update Service \"affinity-clusterip-transition\": Put \"https://192.168.6.175:6443/api/v1/namespaces/services-3414/services/affinity-clusterip-transition\": read tcp 172.18.0.3:53380->192.168.6.175:6443: read: connection reset by peer",

        }
        failed to update Service "affinity-clusterip-transition": Put "https://192.168.6.175:6443/api/v1/namespaces/services-3414/services/affinity-clusterip-transition": read tcp 172.18.0.3:53380->192.168.6.175:6443: read: connection reset by peer

... skipping 4 lines ...
  {"msg":"FAILED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":22,"skipped":828,"failed":30,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}

... skipping 13 lines ...
  Jan 23 12:08:33.480: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1a7b8620-bb8c-4370-8d19-f74fd3b17f59" in namespace "projected-713" to be "Succeeded or Failed"

... skipping 4 lines ...
  Jan 23 12:08:38.294: INFO: Pod "pod-projected-secrets-1a7b8620-bb8c-4370-8d19-f74fd3b17f59" satisfied condition "Succeeded or Failed"

... skipping 11 lines ...
  {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":458,"failed":26,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]"]}

... skipping 8 lines ...
  Jan 23 12:08:37.437: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:08:39.870: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:55404->192.168.6.175:6443: read: connection reset by peer

... skipping 4 lines ...
  Jan 23 12:08:43.174: FAIL: Failed to create replication controller: Post "https://192.168.6.175:6443/api/v1/namespaces/gc-7704/replicationcontrollers": read tcp 172.18.0.3:55450->192.168.6.175:6443: read: connection reset by peer

... skipping 13 lines ...
  Jan 23 12:08:43.550: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:08:43.947: FAIL: Couldn't delete ns: "gc-7704": Delete "https://192.168.6.175:6443/api/v1/namespaces/gc-7704": read tcp 172.18.0.3:55470->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/gc-7704", Err:(*net.OpError)(0xc003137220)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000818480, 0x112}, {0xc0012ccc08, 0x6ec4cca, 0xc0012ccc30})

... skipping 21 lines ...
    Jan 23 12:08:43.174: Failed to create replication controller: Post "https://192.168.6.175:6443/api/v1/namespaces/gc-7704/replicationcontrollers": read tcp 172.18.0.3:55450->192.168.6.175:6443: read: connection reset by peer

... skipping 3 lines ...
  {"msg":"FAILED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":22,"skipped":868,"failed":31,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type 
clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]"]}

... skipping 10 lines ...
  Jan 23 12:08:45.933: FAIL: Failed to create replication controller: Post "https://192.168.6.175:6443/api/v1/namespaces/gc-947/replicationcontrollers": read tcp 172.18.0.3:55472->192.168.6.175:6443: read: connection reset by peer

... skipping 13 lines ...
  Jan 23 12:08:46.305: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:08:46.757: FAIL: Couldn't delete ns: "gc-947": Delete "https://192.168.6.175:6443/api/v1/namespaces/gc-947": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/gc-947", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc00049db80), hintErr:(*errors.errorString)(0xc0001184a0), hintCert:(*x509.Certificate)(0xc000531600)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00128a000, 0x112}, {0xc0012ccc08, 0x6ec4cca, 0xc0012ccc30})

... skipping 21 lines ...
    Jan 23 12:08:45.933: Failed to create replication controller: Post "https://192.168.6.175:6443/api/v1/namespaces/gc-947/replicationcontrollers": read tcp 172.18.0.3:55472->192.168.6.175:6443: read: connection reset by peer

... skipping 3 lines ...
  {"msg":"FAILED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":655,"failed":32,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking 
Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","[sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]"]}

... skipping 27 lines ...
  {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":655,"failed":32,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking 
Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","[sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":22,"skipped":868,"failed":32,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type 
clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]"]}

... skipping 5 lines ...
  Jan 23 12:08:47.228: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:60356->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:08:49.559: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 6 lines ...
  Jan 23 12:09:01.262: FAIL: failed to apply to pod simpletest-rc-to-be-deleted-5prkg in namespace gc-1431, a strategic merge patch: {"metadata":{"ownerReferences":[{"apiVersion":"v1","kind":"ReplicationController","name":"simpletest-rc-to-stay","uid":"ad8f13c1-1390-4213-8c69-fbb9fb791d7c"}]}}

  Unexpected error:

      <*url.Error | 0xc002b850e0>: {

... skipping 31 lines ...
  Jan 23 12:09:01.516: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:09:01.966: FAIL: Couldn't delete ns: "gc-1431": Delete "https://192.168.6.175:6443/api/v1/namespaces/gc-1431": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/gc-1431", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc000f70c00), hintErr:(*errors.errorString)(0xc0001184a0), hintCert:(*x509.Certificate)(0xc000531600)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00078a900, 0x112}, {0xc0012ccc08, 0x6ec4cca, 0xc0012ccc30})

... skipping 21 lines ...
    Jan 23 12:09:01.262: failed to apply to pod simpletest-rc-to-be-deleted-5prkg in namespace gc-1431, a strategic merge patch: {"metadata":{"ownerReferences":[{"apiVersion":"v1","kind":"ReplicationController","name":"simpletest-rc-to-stay","uid":"ad8f13c1-1390-4213-8c69-fbb9fb791d7c"}]}}

    Unexpected error:

        <*url.Error | 0xc002b850e0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":22,"skipped":868,"failed":33,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type 
clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]"]}

... skipping 5 lines ...
  Jan 23 12:09:02.392: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:33184->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:09:04.728: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 5 lines ...
  Jan 23 12:09:09.391: INFO: Waiting up to 5m0s for pod "pod-secrets-9492168f-a605-4391-85d2-c96f3bbf4680" in namespace "secrets-9388" to be "Succeeded or Failed"

... skipping 13 lines ...
  Jan 23 12:09:46.134: INFO: Pod "pod-secrets-9492168f-a605-4391-85d2-c96f3bbf4680" satisfied condition "Succeeded or Failed"

  Jan 23 12:09:46.313: INFO: Trying to get logs from node k8s-conformance-8hxc51-md-0-75bfdd6df6-7j47k pod pod-secrets-9492168f-a605-4391-85d2-c96f3bbf4680 container secret-volume-test: <nil>
  Jan 23 12:09:47.583: INFO: Failed to get logs from node "k8s-conformance-8hxc51-md-0-75bfdd6df6-7j47k" pod "pod-secrets-9492168f-a605-4391-85d2-c96f3bbf4680" container "secret-volume-test". an error on the server ("unknown") has prevented the request from succeeding (get pods pod-secrets-9492168f-a605-4391-85d2-c96f3bbf4680)

... skipping 3 lines ...
  Jan 23 12:09:47.958: FAIL: Unexpected error:

      <*errors.errorString | 0xc001844640>: {
          s: "failed to get logs from pod-secrets-9492168f-a605-4391-85d2-c96f3bbf4680 for secret-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-secrets-9492168f-a605-4391-85d2-c96f3bbf4680)",

      }
      failed to get logs from pod-secrets-9492168f-a605-4391-85d2-c96f3bbf4680 for secret-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-secrets-9492168f-a605-4391-85d2-c96f3bbf4680)

... skipping 31 lines ...
    Jan 23 12:09:47.958: Unexpected error:

        <*errors.errorString | 0xc001844640>: {
            s: "failed to get logs from pod-secrets-9492168f-a605-4391-85d2-c96f3bbf4680 for secret-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-secrets-9492168f-a605-4391-85d2-c96f3bbf4680)",

        }
        failed to get logs from pod-secrets-9492168f-a605-4391-85d2-c96f3bbf4680 for secret-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-secrets-9492168f-a605-4391-85d2-c96f3bbf4680)

... skipping 4 lines ...
  {"msg":"FAILED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":868,"failed":34,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] 
Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}

... skipping 9 lines ...
  Jan 23 12:09:49.714: FAIL: unable to create test secret : Post "https://192.168.6.175:6443/api/v1/namespaces/secrets-1489/secrets": read tcp 172.18.0.3:41928->192.168.6.175:6443: read: connection reset by peer

... skipping 15 lines ...
  Jan 23 12:09:50.014: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:09:50.413: FAIL: Couldn't delete ns: "secrets-1489": Delete "https://192.168.6.175:6443/api/v1/namespaces/secrets-1489": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/secrets-1489", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc002740b00), hintErr:(*errors.errorString)(0xc0001184a0), hintCert:(*x509.Certificate)(0xc000531600)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0028aa240, 0x112}, {0xc0012ccc08, 0x6ec4cca, 0xc0012ccc30})

... skipping 25 lines ...
  {"msg":"FAILED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":868,"failed":35,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] 
Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}

... skipping 5 lines ...
  Jan 23 12:09:53.513: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:09:55.761: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:54614->192.168.6.175:6443: read: connection reset by peer

... skipping 5 lines ...
  Jan 23 12:10:01.832: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc00393e9f0>: {

... skipping 41 lines ...
  Jan 23 12:10:02.153: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:10:02.664: FAIL: Couldn't delete ns: "secrets-9168": Delete "https://192.168.6.175:6443/api/v1/namespaces/secrets-9168": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/secrets-9168", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc002742100), hintErr:(*errors.errorString)(0xc0001184a0), hintCert:(*x509.Certificate)(0xc000531600)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002031e60, 0x112}, {0xc0012ccc08, 0x6ec4cca, 0xc0012ccc30})

... skipping 21 lines ...
    Jan 23 12:10:01.832: Error creating Pod

    Unexpected error:

        <*url.Error | 0xc00393e9f0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":868,"failed":36,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] 
Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}

... skipping 16 lines ...
  Jan 23 12:08:50.654: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-f827" in namespace "subpath-6985" to be "Succeeded or Failed"

... skipping 31 lines ...
  Jan 23 12:10:18.224: INFO: Pod "pod-subpath-test-configmap-f827" satisfied condition "Succeeded or Failed"

... skipping 13 lines ...
  {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":26,"skipped":685,"failed":32,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] 
Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","[sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]"]}

... skipping 162 lines ...
  {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":18,"skipped":504,"failed":26,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]"]}

... skipping 58 lines ...
  Jan 23 12:11:57.583: FAIL: Unexpected error:

      <*url.Error | 0xc004b2a720>: {

... skipping 118 lines ...
                  s: "crypto/rsa: verification error",

... skipping 99 lines ...
      Get "https://192.168.6.175:6443/api/v1/namespaces/projected-892/pods/labelsupdated6557872-c0e7-4734-b051-3bc1e6d0e539": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 18 lines ...
  Jan 23 12:11:58.005: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:11:58.406: FAIL: Couldn't delete ns: "projected-892": Delete "https://192.168.6.175:6443/api/v1/namespaces/projected-892": read tcp 172.18.0.3:44094->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/projected-892", Err:(*net.OpError)(0xc00296be00)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000d6c5a0, 0x112}, {0xc002b62c08, 0x6ec4cca, 0xc002b62c30})

... skipping 21 lines ...
    Jan 23 12:11:57.583: Unexpected error:

        <*url.Error | 0xc004b2a720>: {

... skipping 118 lines ...
                    s: "crypto/rsa: verification error",

... skipping 99 lines ...
        Get "https://192.168.6.175:6443/api/v1/namespaces/projected-892/pods/labelsupdated6557872-c0e7-4734-b051-3bc1e6d0e539": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 4 lines ...
  {"msg":"FAILED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":598,"failed":27,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]"]}

... skipping 51 lines ...
  Jan 23 12:13:25.701: FAIL: Unexpected error:

      <*url.Error | 0xc00426c210>: {

... skipping 118 lines ...
                  s: "crypto/rsa: verification error",

... skipping 99 lines ...
      Get "https://192.168.6.175:6443/api/v1/namespaces/projected-550/pods/labelsupdatebcfddd97-0f4e-4663-b4a5-da002d3f97a6": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 18 lines ...
  Jan 23 12:13:26.163: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:13:26.416: FAIL: Couldn't delete ns: "projected-550": Delete "https://192.168.6.175:6443/api/v1/namespaces/projected-550": read tcp 172.18.0.3:48090->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/projected-550", Err:(*net.OpError)(0xc002f13900)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0020b0a20, 0x112}, {0xc002b62c08, 0x6ec4cca, 0xc002b62c30})

... skipping 21 lines ...
    Jan 23 12:13:25.701: Unexpected error:

        <*url.Error | 0xc00426c210>: {

... skipping 118 lines ...
                    s: "crypto/rsa: verification error",

... skipping 99 lines ...
        Get "https://192.168.6.175:6443/api/v1/namespaces/projected-550/pods/labelsupdatebcfddd97-0f4e-4663-b4a5-da002d3f97a6": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 4 lines ...
  {"msg":"FAILED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":23,"skipped":392,"failed":15,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

... skipping 35 lines ...
  Jan 23 12:10:23.124: INFO: ExternalName service "services-5132/execpodf26hc" failed to resolve to IP

... skipping 2 lines ...
  Jan 23 12:10:26.228: INFO: ExternalName service "services-5132/execpodf26hc" failed to resolve to IP

... skipping 10 lines ...
  E0123 12:14:10.858179      17 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:41042->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:14:10.858: FAIL: All nodes should be ready after test, unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:41042->192.168.6.175:6443: read: connection reset by peer

... skipping 11 lines ...
  Jan 23 12:14:11.220: FAIL: Couldn't delete ns: "services-5132": Delete "https://192.168.6.175:6443/api/v1/namespaces/services-5132": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/services-5132", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc0008c4000), hintErr:(*errors.errorString)(0xc0001924a0), hintCert:(*x509.Certificate)(0xc000c8fb80)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc004c44000, 0xd3}, {0xc0027f8c08, 0x6ec4cca, 0xc0027f8c30})

... skipping 23 lines ...
    Jan 23 12:14:10.858: All nodes should be ready after test, unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:41042->192.168.6.175:6443: read: connection reset by peer

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":23,"skipped":392,"failed":16,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

... skipping 5 lines ...
  Jan 23 12:14:11.599: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:14:13.730: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:37272->192.168.6.175:6443: read: connection reset by peer

... skipping 8 lines ...
  Jan 23 12:14:16.775: FAIL: Expected Service externalsvc to be running

  Unexpected error:

      <*url.Error | 0xc00424e960>: {

... skipping 31 lines ...
  Jan 23 12:14:17.157: FAIL: failed to delete service nodeport-service in namespace services-6029

  Unexpected error:

      <*url.Error | 0xc0018363c0>: {

... skipping 118 lines ...
                  s: "crypto/rsa: verification error",

... skipping 99 lines ...
      Delete "https://192.168.6.175:6443/api/v1/namespaces/services-6029/services/nodeport-service": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 7 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000f0a900, 0x2c5}, {0xc0027f8e60, 0x6ec4cca, 0xc0027f8e80})

  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
  k8s.io/kubernetes/test/e2e/framework.Fail({0xc00239e840, 0x2b0}, {0xc00372ded8, 0xc00239e580, 0x6ec7043})

... skipping 24 lines ...
  Jan 23 12:14:17.535: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:14:17.983: FAIL: Couldn't delete ns: "services-6029": Delete "https://192.168.6.175:6443/api/v1/namespaces/services-6029": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/services-6029", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc0008c6680), hintErr:(*errors.errorString)(0xc0001924a0), hintCert:(*x509.Certificate)(0xc000c8fb80)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0018f2900, 0x112}, {0xc0027f8c08, 0x6ec4cca, 0xc0027f8c30})

... skipping 24 lines ...
    Unexpected error:

        <*url.Error | 0xc00424e960>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":23,"skipped":392,"failed":17,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

... skipping 12 lines ...
  Jan 23 12:14:21.058: FAIL: creating CustomResourceDefinition

  Unexpected error:

      <*url.Error | 0xc00427cd20>: {

... skipping 118 lines ...
                  s: "crypto/rsa: verification error",

... skipping 99 lines ...
      Get "https://192.168.6.175:6443/apis/mygroup.example.com/v1beta1": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 28 lines ...
      Unexpected error:

          <*url.Error | 0xc00427cd20>: {

... skipping 118 lines ...
                      s: "crypto/rsa: verification error",

... skipping 99 lines ...
          Get "https://192.168.6.175:6443/apis/mygroup.example.com/v1beta1": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 4 lines ...
  {"msg":"FAILED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":23,"skipped":405,"failed":18,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]"]}

... skipping 9 lines ...
  Jan 23 12:14:22.862: FAIL: creating CustomResourceDefinition

  Unexpected error:

      <*url.Error | 0xc00346fbf0>: {

... skipping 31 lines ...
  Jan 23 12:14:23.117: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:14:23.477: FAIL: Couldn't delete ns: "custom-resource-definition-1570": Delete "https://192.168.6.175:6443/api/v1/namespaces/custom-resource-definition-1570": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/custom-resource-definition-1570", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc004af0100), hintErr:(*errors.errorString)(0xc0001924a0), hintCert:(*x509.Certificate)(0xc000c8fb80)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0008fd0e0, 0x112}, {0xc001ce2c08, 0x6ec4cca, 0xc001ce2c30})

... skipping 24 lines ...
      Unexpected error:

          <*url.Error | 0xc00346fbf0>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":23,"skipped":405,"failed":19,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]"]}

... skipping 5 lines ...
  Jan 23 12:14:23.819: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:14:26.151: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  STEP: Waiting for a default service account to be provisioned in namespace
  E0123 12:14:30.208174      17 reflector.go:138] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ServiceAccount: Get "https://192.168.6.175:6443/api/v1/namespaces/custom-resource-definition-9972/serviceaccounts?allowWatchBookmarks=true&fieldSelector=metadata.name%3Ddefault&resourceVersion=25949&timeout=6m36s&timeoutSeconds=396&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 3 lines ...
  Jan 23 12:14:34.966: FAIL: creating CustomResourceDefinition

  Unexpected error:

      <*url.Error | 0xc003c30420>: {

... skipping 31 lines ...
  Jan 23 12:14:35.346: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:14:35.809: FAIL: Couldn't delete ns: "custom-resource-definition-9972": Delete "https://192.168.6.175:6443/api/v1/namespaces/custom-resource-definition-9972": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/custom-resource-definition-9972", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc004aeeb00), hintErr:(*errors.errorString)(0xc0001924a0), hintCert:(*x509.Certificate)(0xc000c8fb80)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc004c367e0, 0x112}, {0xc0027f4c08, 0x6ec4cca, 0xc0027f4c30})

... skipping 24 lines ...
      Unexpected error:

          <*url.Error | 0xc003c30420>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":23,"skipped":405,"failed":20,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]"]}

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":153,"failed":13,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-node] PodTemplates should delete a collection of pod templates [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]"]}

... skipping 207 lines ...
  Jan 23 12:14:42.256: FAIL: wait for pod pod3 timeout, err:Get "https://192.168.6.175:6443/api/v1/namespaces/hostport-4965/pods/pod3": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 24 lines ...
    Jan 23 12:14:42.256: wait for pod pod3 timeout, err:Get "https://192.168.6.175:6443/api/v1/namespaces/hostport-4965/pods/pod3": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 8 lines ...
  Jan 23 12:14:36.169: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:14:38.543: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  STEP: Waiting for a default service account to be provisioned in namespace
  W0123 12:14:42.295841      17 reflector.go:324] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ServiceAccount: Get "https://192.168.6.175:6443/api/v1/namespaces/kubectl-3364/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  E0123 12:14:42.295939      17 reflector.go:138] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://192.168.6.175:6443/api/v1/namespaces/kubectl-3364/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:14:44.064: FAIL: Couldn't delete ns: "kubectl-3364": Delete "https://192.168.6.175:6443/api/v1/namespaces/kubectl-3364": read tcp 172.18.0.3:56108->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/kubectl-3364", Err:(*net.OpError)(0xc00451d900)})

... skipping 22 lines ...
      Jan 23 12:14:44.064: Couldn't delete ns: "kubectl-3364": Delete "https://192.168.6.175:6443/api/v1/namespaces/kubectl-3364": read tcp 172.18.0.3:56108->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/kubectl-3364", Err:(*net.OpError)(0xc00451d900)})

... skipping 3 lines ...
  {"msg":"FAILED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":23,"skipped":410,"failed":21,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","[sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]"]}

... skipping 5 lines ...
  Jan 23 12:14:44.380: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  E0123 12:14:48.383882      17 reflector.go:138] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ServiceAccount: Get "https://192.168.6.175:6443/api/v1/namespaces/kubectl-631/serviceaccounts?allowWatchBookmarks=true&fieldSelector=metadata.name%3Ddefault&resourceVersion=26065&timeout=6m56s&timeoutSeconds=416&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 15 lines ...
  {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":24,"skipped":410,"failed":21,"failures":["[sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","[sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]"]}

... skipping 8 lines ...
  Jan 23 12:10:03.022: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:35944->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:10:05.329: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 4 lines ...
  Jan 23 12:10:09.987: INFO: Waiting up to 5m0s for pod "pod-20c4a364-f101-4dbe-b41a-b8c8b754098e" in namespace "emptydir-3252" to be "Succeeded or Failed"

... skipping 100 lines ...
  Jan 23 12:15:13.123: INFO: Failed to get logs from node "k8s-conformance-8hxc51-md-0-75bfdd6df6-9nww5" pod "pod-20c4a364-f101-4dbe-b41a-b8c8b754098e" container "test-container": the server rejected our request for an unknown reason (get pods pod-20c4a364-f101-4dbe-b41a-b8c8b754098e)

... skipping 10 lines ...
  Jan 23 12:15:24.826: FAIL: wait for pod "pod-20c4a364-f101-4dbe-b41a-b8c8b754098e" to disappear

  Expected success, but got an error:

      <*url.Error | 0xc0043b3770>: {

... skipping 118 lines ...
                  s: "crypto/rsa: verification error",

... skipping 99 lines ...
      Get "https://192.168.6.175:6443/api/v1/namespaces/emptydir-3252/pods": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 37 lines ...
    Expected success, but got an error:

        <*url.Error | 0xc0043b3770>: {

... skipping 118 lines ...
                    s: "crypto/rsa: verification error",

... skipping 99 lines ...
        Get "https://192.168.6.175:6443/api/v1/namespaces/emptydir-3252/pods": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 3 lines ...
  {"msg":"FAILED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":889,"failed":37,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-apps] ReplicationController should release no longer matching pods [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","[sig-apps] Job should delete a job [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] 
Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]"]}

... skipping 9 lines ...
  Jan 23 12:15:26.596: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc00333c510>: {

... skipping 41 lines ...
  Jan 23 12:15:26.985: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:15:27.365: FAIL: Couldn't delete ns: "emptydir-8770": Delete "https://192.168.6.175:6443/api/v1/namespaces/emptydir-8770": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/emptydir-8770", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc002742c00), hintErr:(*errors.errorString)(0xc0001184a0), hintCert:(*x509.Certificate)(0xc000531600)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00128aa20, 0x112}, {0xc0012ccc08, 0x6ec4cca, 0xc0012ccc30})

... skipping 21 lines ...
    Jan 23 12:15:26.596: Error creating Pod

    Unexpected error:

        <*url.Error | 0xc00333c510>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":153,"failed":14,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-node] PodTemplates should delete a collection of pod templates [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]"]}

... skipping 44 lines ...
  Jan 23 12:15:54.349: FAIL: wait for pod pod1 timeout, err:Get "https://192.168.6.175:6443/api/v1/namespaces/hostport-530/pods/pod1": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 15 lines ...
  Jan 23 12:15:54.817: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:15:55.329: FAIL: Couldn't delete ns: "hostport-530": Delete "https://192.168.6.175:6443/api/v1/namespaces/hostport-530": read tcp 172.18.0.3:59448->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/hostport-530", Err:(*net.OpError)(0xc0040bf180)})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000a478c0, 0x112}, {0xc0010b8c08, 0x6ec4cca, 0xc0010b8c30})

... skipping 21 lines ...
    Jan 23 12:15:54.349: wait for pod pod1 timeout, err:Get "https://192.168.6.175:6443/api/v1/namespaces/hostport-530/pods/pod1": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 3 lines ...
  {"msg":"FAILED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":153,"failed":15,"failures":["[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","[sig-node] PodTemplates should delete a collection of pod templates [Conformance]","[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]"]}

... skipping 14 lines ...
  Jan 23 12:15:56.915: FAIL: Error creating Pod

  Unexpected error:

      <*url.Error | 0xc004264f60>: {

... skipping 39 lines ...
  Jan 23 12:15:57.217: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:15:57.726: FAIL: Couldn't delete ns: "projected-1084": Delete "https://192.168.6.175:6443/api/v1/namespaces/projected-1084": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/projected-1084", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc001d64000), hintErr:(*errors.errorString)(0xc00007c4b0), hintCert:(*x509.Certificate)(0xc000423b80)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000dba480, 0x112}, {0xc0010b8c08, 0x6ec4cca, 0xc0010b8c30})

... skipping 21 lines ...
    Jan 23 12:15:56.915: Error creating Pod

    Unexpected error:

        <*url.Error | 0xc004264f60>: {

... skipping 24 lines ...
  Jan 23 12:10:20.125: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:43866->192.168.6.175:6443: read: connection reset by peer

... skipping 113 lines ...
  Jan 23 12:15:27.497: INFO: Pod logs-generator failed to be running and ready, or succeeded.

  Jan 23 12:15:27.497: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: false. Pods: [logs-generator]
  Jan 23 12:15:27.497: FAIL: Pod logs-generator was not ready

... skipping 19 lines ...
  Jan 23 12:16:12.169: FAIL: Couldn't delete ns: "kubectl-8666": Delete "https://192.168.6.175:6443/api/v1/namespaces/kubectl-8666": read tcp 172.18.0.3:43346->192.168.6.175:6443: read: connection reset by peer (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/kubectl-8666", Err:(*net.OpError)(0xc002952320)})

... skipping 26 lines ...
  {"msg":"FAILED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":26,"skipped":711,"failed":33,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods 
should function for intra-pod communication: http [NodeConformance] [Conformance]","[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","[sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]"]}

... skipping 5 lines ...
  Jan 23 12:16:12.523: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  STEP: Waiting for a default service account to be provisioned in namespace
  W0123 12:16:19.363275      35 reflector.go:324] k8s.io/client-go/tools/watch/informerwatcher.go:146: failed to list *v1.ServiceAccount: Get "https://192.168.6.175:6443/api/v1/namespaces/kubectl-7052/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  E0123 12:16:19.363400      35 reflector.go:138] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://192.168.6.175:6443/api/v1/namespaces/kubectl-7052/serviceaccounts?fieldSelector=metadata.name%3Ddefault&limit=500&resourceVersion=0": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 7 lines ...
  Jan 23 12:16:25.050: FAIL: Unexpected error:

... skipping 2 lines ...
              s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7052 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")\n\nerror:\nexit status 1",

... skipping 3 lines ...
      error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7052 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s:

... skipping 3 lines ...
      Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

      
      error:

... skipping 22 lines ...
  Jan 23 12:16:26.543: FAIL: Unexpected error:

... skipping 2 lines ...
              s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7052 delete pod logs-generator:\nCommand stdout:\n\nstderr:\nError from server (NotFound): pods \"logs-generator\" not found\n\nerror:\nexit status 1",

... skipping 3 lines ...
      error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7052 delete pod logs-generator:

... skipping 3 lines ...
      Error from server (NotFound): pods "logs-generator" not found

      
      error:

... skipping 21 lines ...
  E0123 12:16:27.343719      35 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:53300->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:16:27.344: FAIL: All nodes should be ready after test, unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:53300->192.168.6.175:6443: read: connection reset by peer

... skipping 11 lines ...
  Jan 23 12:16:27.653: FAIL: Couldn't delete ns: "kubectl-7052": Delete "https://192.168.6.175:6443/api/v1/namespaces/kubectl-7052": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/kubectl-7052", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc000762680), hintErr:(*errors.errorString)(0xc00007c4b0), hintCert:(*x509.Certificate)(0xc000761080)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002f05260, 0xd3}, {0xc0020dcc08, 0x6ec4cca, 0xc0020dcc30})

... skipping 23 lines ...
      Jan 23 12:16:25.050: Unexpected error:

... skipping 2 lines ...
                  s: "error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7052 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"kubernetes\")\n\nerror:\nexit status 1",

... skipping 3 lines ...
          error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7052 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s:

... skipping 3 lines ...
          Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

          
          error:

... skipping 5 lines ...
  {"msg":"FAILED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":598,"failed":28,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]"]}

... skipping 90 lines ...
  Jan 23 12:16:22.294: FAIL: Unexpected error:

      <*url.Error | 0xc003a65500>: {

... skipping 118 lines ...
                  s: "crypto/rsa: verification error",

... skipping 99 lines ...
      Get "https://192.168.6.175:6443/api/v1/namespaces/projected-5570/pods/labelsupdatea5baab08-b493-4969-aa31-c757ca1afb0e": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 18 lines ...
  E0123 12:16:30.539557      21 request.go:1085] Unexpected error when reading response body: read tcp 172.18.0.3:53356->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:16:30.539: FAIL: All nodes should be ready after test, unexpected error when reading response body. Please retry. Original error: read tcp 172.18.0.3:53356->192.168.6.175:6443: read: connection reset by peer

... skipping 11 lines ...
  Jan 23 12:16:30.907: FAIL: Couldn't delete ns: "projected-5570": Delete "https://192.168.6.175:6443/api/v1/namespaces/projected-5570": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/projected-5570", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc0003a3700), hintErr:(*errors.errorString)(0xc00007c4b0), hintCert:(*x509.Certificate)(0xc000181600)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002cfea80, 0xd3}, {0xc002b62c08, 0x6ec4cca, 0xc002b62c30})

... skipping 21 lines ...
    Jan 23 12:16:22.294: Unexpected error:

        <*url.Error | 0xc003a65500>: {

... skipping 118 lines ...
                    s: "crypto/rsa: verification error",

... skipping 99 lines ...
        Get "https://192.168.6.175:6443/api/v1/namespaces/projected-5570/pods/labelsupdatea5baab08-b493-4969-aa31-c757ca1afb0e": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 4 lines ...
  {"msg":"FAILED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":598,"failed":29,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]"]}

... skipping 8 lines ...
  Jan 23 12:16:31.308: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:16:33.670: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 12 lines ...
  Jan 23 12:16:45.534: FAIL: Unexpected error:

      <*url.Error | 0xc002e06c30>: {

... skipping 31 lines ...
  E0123 12:16:45.900739      21 reflector.go:138] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.LimitRange: Get "https://192.168.6.175:6443/api/v1/namespaces/limitrange-6463/limitranges?allowWatchBookmarks=true&labelSelector=time%3D4075867146dc50269-8066-4e6a-acef-9cbd4a259563&resourceVersion=26833&timeout=8m38s&timeoutSeconds=518&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:16:45.965: FAIL: All nodes should be ready after test, Get "https://192.168.6.175:6443/api/v1/nodes": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 11 lines ...
  Jan 23 12:16:46.352: FAIL: Couldn't delete ns: "limitrange-6463": Delete "https://192.168.6.175:6443/api/v1/namespaces/limitrange-6463": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes") (&url.Error{Op:"Delete", URL:"https://192.168.6.175:6443/api/v1/namespaces/limitrange-6463", Err:x509.UnknownAuthorityError{Cert:(*x509.Certificate)(0xc002508000), hintErr:(*errors.errorString)(0xc00007c4b0), hintCert:(*x509.Certificate)(0xc000181600)}})

... skipping 4 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()

... skipping 3 lines ...
  k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00237d320, 0x112}, {0xc002b62c08, 0x6ec4cca, 0xc002b62c30})

... skipping 21 lines ...
    Jan 23 12:16:45.534: Unexpected error:

        <*url.Error | 0xc002e06c30>: {

... skipping 19 lines ...
  {"msg":"FAILED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":18,"skipped":624,"failed":30,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]"]}

... skipping 5 lines ...
  Jan 23 12:16:46.760: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": read tcp 172.18.0.3:57984->192.168.6.175:6443: read: connection reset by peer

  Jan 23 12:16:49.125: INFO: Unexpected error while creating namespace: Post "https://192.168.6.175:6443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  E0123 12:16:52.839977      21 reflector.go:138] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ServiceAccount: Get "https://192.168.6.175:6443/api/v1/namespaces/limitrange-8625/serviceaccounts?allowWatchBookmarks=true&fieldSelector=metadata.name%3Ddefault&resourceVersion=26884&timeout=5m39s&timeoutSeconds=339&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 13 lines ...
  E0123 12:16:54.855344      21 reflector.go:138] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.LimitRange: Get "https://192.168.6.175:6443/api/v1/namespaces/limitrange-8625/limitranges?allowWatchBookmarks=true&labelSelector=time%3D537391691019c31fb-327d-4a3b-92d4-87e78133cd24&resourceVersion=26897&timeout=6m56s&timeoutSeconds=416&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:16:55.837: FAIL: Unexpected error:

      <*url.Error | 0xc002ce0bd0>: {

... skipping 118 lines ...
                  s: "crypto/rsa: verification error",

... skipping 99 lines ...
      Get "https://192.168.6.175:6443/api/v1/namespaces/limitrange-8625/pods/pod-no-resources": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 25 lines ...
    Jan 23 12:16:55.837: Unexpected error:

        <*url.Error | 0xc002ce0bd0>: {

... skipping 118 lines ...
                    s: "crypto/rsa: verification error",

... skipping 99 lines ...
        Get "https://192.168.6.175:6443/api/v1/namespaces/limitrange-8625/pods/pod-no-resources": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 4 lines ...
  {"msg":"FAILED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":18,"skipped":624,"failed":31,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Lease lease API should be available [Conformance]","[sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-network] Service endpoints latency should not be very high  [Conformance]","[sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-network] DNS should support configurable pod DNS nameservers [Conformance]","[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]"]}

... skipping 6 lines ...
  E0123 12:16:58.912802      21 reflector.go:138] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ServiceAccount: Get "https://192.168.6.175:6443/api/v1/namespaces/limitrange-8453/serviceaccounts?allowWatchBookmarks=true&fieldSelector=metadata.name%3Ddefault&resourceVersion=26915&timeout=7m23s&timeoutSeconds=443&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 16 lines ...
  E0123 12:17:00.957785      21 reflector.go:138] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.LimitRange: Get "https://192.168.6.175:6443/api/v1/namespaces/limitrange-8453/limitranges?allowWatchBookmarks=true&labelSelector=time%3D912998183c2b0cb3b-bf6c-4768-b30f-037838a56bba&resourceVersion=26926&timeout=5m46s&timeoutSeconds=346&watch=true": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

  Jan 23 12:17:01.893: FAIL: Unexpected error:

      <*url.Error | 0xc0038a9140>: {

... skipping 118 lines ...
                  s: "crypto/rsa: verification error",

... skipping 99 lines ...
      Get "https://192.168.6.175:6443/api/v1/namespaces/limitrange-8453/pods/pod-partial-resources": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 25 lines ...
    Jan 23 12:17:01.893: Unexpected error:

        <*url.Error | 0xc0038a9140>: {

... skipping 118 lines ...
                    s: "crypto/rsa: verification error",

... skipping 99 lines ...
        Get "https://192.168.6.175:6443/api/v1/namespaces/limitrange-8453/pods/pod-partial-resources": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

... skipping 4 lines ...
  {"msg":"FAILED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":18,"skipped":624,"failed":32,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:Clust