Result: FAILURE
Tests: 0 failed / 7 succeeded
Started: 2022-09-15 20:29
Elapsed: 1h6m
Revision: main

No Test Failures!


Passed tests: 7
Skipped tests: 20

Error lines from build-log.txt

... skipping 899 lines ...
Status: Downloaded newer image for quay.io/jetstack/cert-manager-controller:v1.9.1
quay.io/jetstack/cert-manager-controller:v1.9.1
+ export GINKGO_NODES=3
+ GINKGO_NODES=3
+ export GINKGO_NOCOLOR=true
+ GINKGO_NOCOLOR=true
+ export GINKGO_ARGS=--fail-fast
+ GINKGO_ARGS=--fail-fast
+ export E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ export ARTIFACTS=/logs/artifacts
+ ARTIFACTS=/logs/artifacts
+ export SKIP_RESOURCE_CLEANUP=false
+ SKIP_RESOURCE_CLEANUP=false
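The doubled `+ export VAR=...` / `+ VAR=...` pairs above are bash xtrace output from the job's setup script running under `set -x`: `export VAR=value` is traced once for the `export` builtin and once for the underlying assignment. A minimal reproduction (variable name taken from the log):

```shell
# Under xtrace, `export VAR=value` produces two trace lines on stderr,
# matching the doubled pairs seen in this build log:
#   + export GINKGO_NODES=3
#   + GINKGO_NODES=3
bash -xc 'export GINKGO_NODES=3'
```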
... skipping 78 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6 --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition.yaml
mkdir -p /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/extension/config/default > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension/deployment.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo-v2.1.4 -v --trace --tags=e2e --focus="\[K8s-Upgrade\]"  --nodes=3 --no-color=true --output-dir="/logs/artifacts" --junit-report="junit.e2e_suite.1.xml" --fail-fast . -- \
    -e2e.artifacts-folder="/logs/artifacts" \
    -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
go: downloading github.com/onsi/gomega v1.20.0
go: downloading k8s.io/apimachinery v0.25.0
go: downloading k8s.io/api v0.25.0
... skipping 221 lines ...
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-soloe4-mp-0-config created
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-soloe4-mp-0-config-cgroupfs created
    cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-soloe4 created
    machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-soloe4-mp-0 created
    dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-soloe4-dmp-0 created

    Failed to get logs for Machine k8s-upgrade-and-conformance-soloe4-fvv82-6s6wv, Cluster k8s-upgrade-and-conformance-mswovu/k8s-upgrade-and-conformance-soloe4: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4, Cluster k8s-upgrade-and-conformance-mswovu/k8s-upgrade-and-conformance-soloe4: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f, Cluster k8s-upgrade-and-conformance-mswovu/k8s-upgrade-and-conformance-soloe4: exit status 2
    Failed to get logs for MachinePool k8s-upgrade-and-conformance-soloe4-mp-0, Cluster k8s-upgrade-and-conformance-mswovu/k8s-upgrade-and-conformance-soloe4: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec 09/15/22 20:37:24.338
    INFO: Creating namespace k8s-upgrade-and-conformance-mswovu
    INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-mswovu"
... skipping 41 lines ...
    
    Running in parallel across 4 nodes
    
    Sep 15 20:47:31.033: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 15 20:47:31.038: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
    Sep 15 20:47:31.051: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
    Sep 15 20:47:31.098: INFO: The status of Pod coredns-558bd4d5db-v7wpm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:31.099: INFO: The status of Pod coredns-558bd4d5db-zttsv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:31.099: INFO: The status of Pod kindnet-gng7p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:31.099: INFO: The status of Pod kindnet-sjt5g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:31.099: INFO: The status of Pod kube-proxy-52jvh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:31.099: INFO: The status of Pod kube-proxy-xcc26 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:31.099: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
    Sep 15 20:47:31.099: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 15 20:47:31.099: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 15 20:47:31.099: INFO: coredns-558bd4d5db-v7wpm  k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:31.099: INFO: coredns-558bd4d5db-zttsv  k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:31.099: INFO: kindnet-gng7p             k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:27 +0000 UTC  }]
    Sep 15 20:47:31.099: INFO: kindnet-sjt5g             k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:10 +0000 UTC  }]
    Sep 15 20:47:31.099: INFO: kube-proxy-52jvh          k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  }]
    Sep 15 20:47:31.099: INFO: kube-proxy-xcc26          k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  }]
    Sep 15 20:47:31.099: INFO: 
    Sep 15 20:47:33.122: INFO: The status of Pod coredns-558bd4d5db-v7wpm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:33.122: INFO: The status of Pod coredns-558bd4d5db-zttsv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:33.122: INFO: The status of Pod kindnet-gng7p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:33.122: INFO: The status of Pod kindnet-sjt5g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:33.122: INFO: The status of Pod kube-proxy-52jvh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:33.122: INFO: The status of Pod kube-proxy-xcc26 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:33.122: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
    Sep 15 20:47:33.122: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 15 20:47:33.122: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 15 20:47:33.122: INFO: coredns-558bd4d5db-v7wpm  k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:33.122: INFO: coredns-558bd4d5db-zttsv  k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:33.122: INFO: kindnet-gng7p             k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:27 +0000 UTC  }]
    Sep 15 20:47:33.122: INFO: kindnet-sjt5g             k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:10 +0000 UTC  }]
    Sep 15 20:47:33.122: INFO: kube-proxy-52jvh          k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  }]
    Sep 15 20:47:33.122: INFO: kube-proxy-xcc26          k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  }]
    Sep 15 20:47:33.122: INFO: 
    Sep 15 20:47:35.125: INFO: The status of Pod coredns-558bd4d5db-v7wpm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:35.125: INFO: The status of Pod coredns-558bd4d5db-zttsv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:35.125: INFO: The status of Pod kindnet-gng7p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:35.125: INFO: The status of Pod kindnet-sjt5g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:35.125: INFO: The status of Pod kube-proxy-52jvh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:35.125: INFO: The status of Pod kube-proxy-xcc26 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:35.126: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
    Sep 15 20:47:35.126: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 15 20:47:35.126: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 15 20:47:35.126: INFO: coredns-558bd4d5db-v7wpm  k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:35.126: INFO: coredns-558bd4d5db-zttsv  k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:35.126: INFO: kindnet-gng7p             k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:27 +0000 UTC  }]
    Sep 15 20:47:35.126: INFO: kindnet-sjt5g             k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:10 +0000 UTC  }]
    Sep 15 20:47:35.126: INFO: kube-proxy-52jvh          k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  }]
    Sep 15 20:47:35.126: INFO: kube-proxy-xcc26          k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  }]
    Sep 15 20:47:35.126: INFO: 
    Sep 15 20:47:37.119: INFO: The status of Pod coredns-558bd4d5db-v7wpm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:37.119: INFO: The status of Pod coredns-558bd4d5db-zttsv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:37.119: INFO: The status of Pod kindnet-gng7p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:37.119: INFO: The status of Pod kindnet-sjt5g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:37.119: INFO: The status of Pod kube-proxy-52jvh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:37.119: INFO: The status of Pod kube-proxy-xcc26 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:37.119: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
    Sep 15 20:47:37.119: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 15 20:47:37.119: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 15 20:47:37.119: INFO: coredns-558bd4d5db-v7wpm  k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:37.119: INFO: coredns-558bd4d5db-zttsv  k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:37.119: INFO: kindnet-gng7p             k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:27 +0000 UTC  }]
    Sep 15 20:47:37.119: INFO: kindnet-sjt5g             k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:10 +0000 UTC  }]
    Sep 15 20:47:37.119: INFO: kube-proxy-52jvh          k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  }]
    Sep 15 20:47:37.119: INFO: kube-proxy-xcc26          k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  }]
    Sep 15 20:47:37.119: INFO: 
    Sep 15 20:47:39.142: INFO: The status of Pod coredns-558bd4d5db-v7wpm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:39.142: INFO: The status of Pod coredns-558bd4d5db-zttsv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:39.142: INFO: The status of Pod kindnet-gng7p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:39.142: INFO: The status of Pod kindnet-sjt5g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:39.142: INFO: The status of Pod kube-proxy-52jvh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:39.142: INFO: The status of Pod kube-proxy-xcc26 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:39.142: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
    Sep 15 20:47:39.142: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 15 20:47:39.142: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 15 20:47:39.142: INFO: coredns-558bd4d5db-v7wpm  k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:39.142: INFO: coredns-558bd4d5db-zttsv  k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:39.142: INFO: kindnet-gng7p             k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:27 +0000 UTC  }]
    Sep 15 20:47:39.142: INFO: kindnet-sjt5g             k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:10 +0000 UTC  }]
    Sep 15 20:47:39.142: INFO: kube-proxy-52jvh          k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  }]
    Sep 15 20:47:39.142: INFO: kube-proxy-xcc26          k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  }]
    Sep 15 20:47:39.142: INFO: 
    Sep 15 20:47:41.119: INFO: The status of Pod coredns-558bd4d5db-v7wpm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:41.120: INFO: The status of Pod coredns-558bd4d5db-zttsv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:41.120: INFO: The status of Pod kindnet-gng7p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:41.120: INFO: The status of Pod kindnet-sjt5g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:41.120: INFO: The status of Pod kube-proxy-52jvh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:41.120: INFO: The status of Pod kube-proxy-xcc26 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:41.120: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
    Sep 15 20:47:41.120: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 15 20:47:41.120: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 15 20:47:41.120: INFO: coredns-558bd4d5db-v7wpm  k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:41.120: INFO: coredns-558bd4d5db-zttsv  k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:41.120: INFO: kindnet-gng7p             k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:27 +0000 UTC  }]
    Sep 15 20:47:41.120: INFO: kindnet-sjt5g             k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:10 +0000 UTC  }]
    Sep 15 20:47:41.120: INFO: kube-proxy-52jvh          k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  }]
    Sep 15 20:47:41.120: INFO: kube-proxy-xcc26          k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  }]
    Sep 15 20:47:41.120: INFO: 
    Sep 15 20:47:43.121: INFO: The status of Pod coredns-558bd4d5db-v7wpm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:43.121: INFO: The status of Pod coredns-558bd4d5db-zttsv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:43.121: INFO: The status of Pod kindnet-gng7p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:43.121: INFO: The status of Pod kindnet-sjt5g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:43.121: INFO: The status of Pod kube-proxy-52jvh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:43.121: INFO: The status of Pod kube-proxy-xcc26 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

    Sep 15 20:47:43.121: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (12 seconds elapsed)
    Sep 15 20:47:43.121: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 15 20:47:43.121: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 15 20:47:43.121: INFO: coredns-558bd4d5db-v7wpm  k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:43.121: INFO: coredns-558bd4d5db-zttsv  k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:43.121: INFO: kindnet-gng7p             k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:27 +0000 UTC  }]
    Sep 15 20:47:43.121: INFO: kindnet-sjt5g             k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:10 +0000 UTC  }]
    Sep 15 20:47:43.121: INFO: kube-proxy-52jvh          k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  }]
    Sep 15 20:47:43.121: INFO: kube-proxy-xcc26          k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  }]
    Sep 15 20:47:43.121: INFO: 
    Sep 15 20:47:45.124: INFO: The status of Pod coredns-558bd4d5db-v7wpm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:45.124: INFO: The status of Pod coredns-558bd4d5db-zttsv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:45.124: INFO: The status of Pod kindnet-gng7p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:45.124: INFO: The status of Pod kindnet-sjt5g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:45.124: INFO: The status of Pod kube-proxy-52jvh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:45.124: INFO: The status of Pod kube-proxy-xcc26 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:45.124: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)
    Sep 15 20:47:45.124: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 15 20:47:45.124: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 15 20:47:45.124: INFO: coredns-558bd4d5db-v7wpm  k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:45.125: INFO: coredns-558bd4d5db-zttsv  k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:45.125: INFO: kindnet-gng7p             k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:27 +0000 UTC  }]
    Sep 15 20:47:45.125: INFO: kindnet-sjt5g             k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:10 +0000 UTC  }]
    Sep 15 20:47:45.125: INFO: kube-proxy-52jvh          k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  }]
    Sep 15 20:47:45.125: INFO: kube-proxy-xcc26          k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  }]
    Sep 15 20:47:45.125: INFO: 
    Sep 15 20:47:47.120: INFO: The status of Pod coredns-558bd4d5db-v7wpm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:47.120: INFO: The status of Pod coredns-558bd4d5db-zttsv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:47.121: INFO: The status of Pod kindnet-gng7p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:47.121: INFO: The status of Pod kindnet-sjt5g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:47.121: INFO: The status of Pod kube-proxy-52jvh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:47.121: INFO: The status of Pod kube-proxy-xcc26 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:47.121: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (16 seconds elapsed)
    Sep 15 20:47:47.121: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 15 20:47:47.121: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 15 20:47:47.121: INFO: coredns-558bd4d5db-v7wpm  k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:47.121: INFO: coredns-558bd4d5db-zttsv  k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:47.121: INFO: kindnet-gng7p             k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:27 +0000 UTC  }]
    Sep 15 20:47:47.121: INFO: kindnet-sjt5g             k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:10 +0000 UTC  }]
    Sep 15 20:47:47.121: INFO: kube-proxy-52jvh          k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  }]
    Sep 15 20:47:47.121: INFO: kube-proxy-xcc26          k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  }]
    Sep 15 20:47:47.121: INFO: 
    Sep 15 20:47:49.126: INFO: The status of Pod coredns-558bd4d5db-v7wpm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:49.126: INFO: The status of Pod coredns-558bd4d5db-zttsv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:49.126: INFO: The status of Pod kindnet-gng7p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:49.126: INFO: The status of Pod kindnet-sjt5g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:49.126: INFO: The status of Pod kube-proxy-52jvh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:49.126: INFO: The status of Pod kube-proxy-xcc26 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:49.126: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (18 seconds elapsed)
    Sep 15 20:47:49.126: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 15 20:47:49.126: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 15 20:47:49.126: INFO: coredns-558bd4d5db-v7wpm  k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:49.126: INFO: coredns-558bd4d5db-zttsv  k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:49.126: INFO: kindnet-gng7p             k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:27 +0000 UTC  }]
    Sep 15 20:47:49.126: INFO: kindnet-sjt5g             k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:10 +0000 UTC  }]
    Sep 15 20:47:49.126: INFO: kube-proxy-52jvh          k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  }]
    Sep 15 20:47:49.127: INFO: kube-proxy-xcc26          k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  }]
    Sep 15 20:47:49.127: INFO: 
    Sep 15 20:47:51.120: INFO: The status of Pod coredns-558bd4d5db-v7wpm is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:51.121: INFO: The status of Pod coredns-558bd4d5db-zttsv is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:51.121: INFO: The status of Pod kindnet-gng7p is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:51.121: INFO: The status of Pod kindnet-sjt5g is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:51.121: INFO: The status of Pod kube-proxy-52jvh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:51.121: INFO: The status of Pod kube-proxy-xcc26 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:51.121: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (20 seconds elapsed)
    Sep 15 20:47:51.121: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 15 20:47:51.121: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep 15 20:47:51.121: INFO: coredns-558bd4d5db-v7wpm  k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:51.121: INFO: coredns-558bd4d5db-zttsv  k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:44:47 +0000 UTC  }]
    Sep 15 20:47:51.121: INFO: kindnet-gng7p             k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:27 +0000 UTC  }]
    Sep 15 20:47:51.121: INFO: kindnet-sjt5g             k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:39:10 +0000 UTC  }]
    Sep 15 20:47:51.121: INFO: kube-proxy-52jvh          k8s-upgrade-and-conformance-soloe4-worker-25itt5  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:46:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:54 +0000 UTC  }]
    Sep 15 20:47:51.121: INFO: kube-proxy-xcc26          k8s-upgrade-and-conformance-soloe4-worker-s9lnsb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:45:00 +0000 UTC  }]
    Sep 15 20:47:51.121: INFO: 
    Sep 15 20:47:53.120: INFO: The status of Pod coredns-558bd4d5db-7rj7v is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:53.120: INFO: The status of Pod coredns-558bd4d5db-hrr6c is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:53.120: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (22 seconds elapsed)
    Sep 15 20:47:53.120: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
    Sep 15 20:47:53.120: INFO: POD                       NODE                                                            PHASE    GRACE  CONDITIONS
    Sep 15 20:47:53.120: INFO: coredns-558bd4d5db-7rj7v  k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:52 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:52 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:52 +0000 UTC  }]
    Sep 15 20:47:53.120: INFO: coredns-558bd4d5db-hrr6c  k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:52 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:52 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:52 +0000 UTC  }]
    Sep 15 20:47:53.120: INFO: 
    Sep 15 20:47:55.119: INFO: The status of Pod coredns-558bd4d5db-hrr6c is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep 15 20:47:55.119: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (24 seconds elapsed)
    Sep 15 20:47:55.119: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep 15 20:47:55.119: INFO: POD                       NODE                                                            PHASE    GRACE  CONDITIONS
    Sep 15 20:47:55.119: INFO: coredns-558bd4d5db-hrr6c  k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:52 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:52 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 20:47:52 +0000 UTC  }]
    Sep 15 20:47:55.119: INFO: 
    Sep 15 20:47:57.122: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (26 seconds elapsed)
... skipping 32 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 20:47:57.276: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bddaff6-0b92-4ac1-b41c-e67bafad48f1" in namespace "downward-api-7231" to be "Succeeded or Failed"
    Sep 15 20:47:57.286: INFO: Pod "downwardapi-volume-9bddaff6-0b92-4ac1-b41c-e67bafad48f1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.159496ms
    Sep 15 20:47:59.290: INFO: Pod "downwardapi-volume-9bddaff6-0b92-4ac1-b41c-e67bafad48f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014058221s
    Sep 15 20:48:01.314: INFO: Pod "downwardapi-volume-9bddaff6-0b92-4ac1-b41c-e67bafad48f1": Phase="Running", Reason="", readiness=true. Elapsed: 4.037400934s
    Sep 15 20:48:03.322: INFO: Pod "downwardapi-volume-9bddaff6-0b92-4ac1-b41c-e67bafad48f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045258642s
    STEP: Saw pod success
    Sep 15 20:48:03.322: INFO: Pod "downwardapi-volume-9bddaff6-0b92-4ac1-b41c-e67bafad48f1" satisfied condition "Succeeded or Failed"
    Sep 15 20:48:03.325: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod downwardapi-volume-9bddaff6-0b92-4ac1-b41c-e67bafad48f1 container client-container: <nil>
    STEP: delete the pod
    Sep 15 20:48:03.362: INFO: Waiting for pod downwardapi-volume-9bddaff6-0b92-4ac1-b41c-e67bafad48f1 to disappear
    Sep 15 20:48:03.366: INFO: Pod downwardapi-volume-9bddaff6-0b92-4ac1-b41c-e67bafad48f1 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:03.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7231" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:03.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-7018" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    Sep 15 20:47:57.288: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-88853015-f25a-4e82-bb98-48c208b6a224
    STEP: Creating a pod to test consume configMaps
    Sep 15 20:47:57.312: INFO: Waiting up to 5m0s for pod "pod-configmaps-955ef3c6-df2d-4888-ae32-a486bbfea2c1" in namespace "configmap-869" to be "Succeeded or Failed"
    Sep 15 20:47:57.322: INFO: Pod "pod-configmaps-955ef3c6-df2d-4888-ae32-a486bbfea2c1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.697375ms
    Sep 15 20:47:59.327: INFO: Pod "pod-configmaps-955ef3c6-df2d-4888-ae32-a486bbfea2c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01454529s
    Sep 15 20:48:01.878: INFO: Pod "pod-configmaps-955ef3c6-df2d-4888-ae32-a486bbfea2c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.565752687s
    Sep 15 20:48:03.900: INFO: Pod "pod-configmaps-955ef3c6-df2d-4888-ae32-a486bbfea2c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.588008778s
    STEP: Saw pod success
    Sep 15 20:48:03.900: INFO: Pod "pod-configmaps-955ef3c6-df2d-4888-ae32-a486bbfea2c1" satisfied condition "Succeeded or Failed"
    Sep 15 20:48:03.903: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-configmaps-955ef3c6-df2d-4888-ae32-a486bbfea2c1 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 15 20:48:03.936: INFO: Waiting for pod pod-configmaps-955ef3c6-df2d-4888-ae32-a486bbfea2c1 to disappear
    Sep 15 20:48:03.943: INFO: Pod pod-configmaps-955ef3c6-df2d-4888-ae32-a486bbfea2c1 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:03.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-869" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":23,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    Sep 15 20:47:57.260: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-9556da03-2f29-44a5-95c0-a3ac673a0699
    STEP: Creating a pod to test consume secrets
    Sep 15 20:47:57.322: INFO: Waiting up to 5m0s for pod "pod-secrets-a08160db-e4c5-4368-b071-6ceaf65d6de4" in namespace "secrets-1310" to be "Succeeded or Failed"
    Sep 15 20:47:57.329: INFO: Pod "pod-secrets-a08160db-e4c5-4368-b071-6ceaf65d6de4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349726ms
    Sep 15 20:47:59.333: INFO: Pod "pod-secrets-a08160db-e4c5-4368-b071-6ceaf65d6de4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010862273s
    Sep 15 20:48:01.879: INFO: Pod "pod-secrets-a08160db-e4c5-4368-b071-6ceaf65d6de4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.556468473s
    Sep 15 20:48:03.900: INFO: Pod "pod-secrets-a08160db-e4c5-4368-b071-6ceaf65d6de4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577988425s
    Sep 15 20:48:05.905: INFO: Pod "pod-secrets-a08160db-e4c5-4368-b071-6ceaf65d6de4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.582715531s
    STEP: Saw pod success
    Sep 15 20:48:05.905: INFO: Pod "pod-secrets-a08160db-e4c5-4368-b071-6ceaf65d6de4" satisfied condition "Succeeded or Failed"
    Sep 15 20:48:05.908: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f pod pod-secrets-a08160db-e4c5-4368-b071-6ceaf65d6de4 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 15 20:48:05.939: INFO: Waiting for pod pod-secrets-a08160db-e4c5-4368-b071-6ceaf65d6de4 to disappear
    Sep 15 20:48:05.942: INFO: Pod pod-secrets-a08160db-e4c5-4368-b071-6ceaf65d6de4 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:05.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-1310" for this suite.
    STEP: Destroying namespace "secret-namespace-7259" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":27,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:48:03.958: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-d64bc9ff-51ee-43e8-a059-fe16c43f3271
    STEP: Creating a pod to test consume secrets
    Sep 15 20:48:04.008: INFO: Waiting up to 5m0s for pod "pod-secrets-b7dbfc3d-1781-4620-b2a0-0fbeef4d02ff" in namespace "secrets-9372" to be "Succeeded or Failed"
    Sep 15 20:48:04.019: INFO: Pod "pod-secrets-b7dbfc3d-1781-4620-b2a0-0fbeef4d02ff": Phase="Pending", Reason="", readiness=false. Elapsed: 11.823701ms
    Sep 15 20:48:06.024: INFO: Pod "pod-secrets-b7dbfc3d-1781-4620-b2a0-0fbeef4d02ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016576822s
    STEP: Saw pod success
    Sep 15 20:48:06.024: INFO: Pod "pod-secrets-b7dbfc3d-1781-4620-b2a0-0fbeef4d02ff" satisfied condition "Succeeded or Failed"
    Sep 15 20:48:06.027: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-secrets-b7dbfc3d-1781-4620-b2a0-0fbeef4d02ff container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 15 20:48:06.053: INFO: Waiting for pod pod-secrets-b7dbfc3d-1781-4620-b2a0-0fbeef4d02ff to disappear
    Sep 15 20:48:06.056: INFO: Pod pod-secrets-b7dbfc3d-1781-4620-b2a0-0fbeef4d02ff no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:06.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-9372" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:48:06.093: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 15 20:48:06.140: INFO: Waiting up to 5m0s for pod "downward-api-c4c037fb-1580-4cc2-8932-66677de47d4c" in namespace "downward-api-271" to be "Succeeded or Failed"
    Sep 15 20:48:06.150: INFO: Pod "downward-api-c4c037fb-1580-4cc2-8932-66677de47d4c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.187316ms
    Sep 15 20:48:08.154: INFO: Pod "downward-api-c4c037fb-1580-4cc2-8932-66677de47d4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014272916s
    STEP: Saw pod success
    Sep 15 20:48:08.154: INFO: Pod "downward-api-c4c037fb-1580-4cc2-8932-66677de47d4c" satisfied condition "Succeeded or Failed"
    Sep 15 20:48:08.157: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f pod downward-api-c4c037fb-1580-4cc2-8932-66677de47d4c container dapi-container: <nil>
    STEP: delete the pod
    Sep 15 20:48:08.177: INFO: Waiting for pod downward-api-c4c037fb-1580-4cc2-8932-66677de47d4c to disappear
    Sep 15 20:48:08.183: INFO: Pod downward-api-c4c037fb-1580-4cc2-8932-66677de47d4c no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:08.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-271" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":108,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:10.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8576" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":3,"skipped":36,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:23.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-4444" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":3,"skipped":112,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Looking for a node to schedule stateful set and pod
    STEP: Creating pod with conflicting port in namespace statefulset-7313
    STEP: Creating statefulset with conflicting port in namespace statefulset-7313
    STEP: Waiting until pod test-pod will start running in namespace statefulset-7313
    STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7313
    Sep 15 20:48:16.468: INFO: Observed stateful pod in namespace: statefulset-7313, name: ss-0, uid: 41202395-f598-411b-ab4a-1c8a3dfd30a0, status phase: Pending. Waiting for statefulset controller to delete.
    Sep 15 20:48:17.052: INFO: Observed stateful pod in namespace: statefulset-7313, name: ss-0, uid: 41202395-f598-411b-ab4a-1c8a3dfd30a0, status phase: Failed. Waiting for statefulset controller to delete.
    Sep 15 20:48:17.059: INFO: Observed stateful pod in namespace: statefulset-7313, name: ss-0, uid: 41202395-f598-411b-ab4a-1c8a3dfd30a0, status phase: Failed. Waiting for statefulset controller to delete.
    Sep 15 20:48:17.061: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7313
    STEP: Removing pod with conflicting port in namespace statefulset-7313
    STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7313 and will be in running state
    [AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
    Sep 15 20:48:21.089: INFO: Deleting all statefulset in ns statefulset-7313
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:31.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-7313" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":4,"skipped":91,"failed":0}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:48:31.137: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:39.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-7803" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":5,"skipped":91,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:45.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-1378" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":122,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:48:45.425: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-cb02181d-559a-4c74-932f-c0f57bcd0c13
    STEP: Creating a pod to test consume secrets
    Sep 15 20:48:45.463: INFO: Waiting up to 5m0s for pod "pod-secrets-9d94213a-6243-449c-8183-5e225025789e" in namespace "secrets-3856" to be "Succeeded or Failed"
    Sep 15 20:48:45.466: INFO: Pod "pod-secrets-9d94213a-6243-449c-8183-5e225025789e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.777454ms
    Sep 15 20:48:47.470: INFO: Pod "pod-secrets-9d94213a-6243-449c-8183-5e225025789e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00691334s
    STEP: Saw pod success
    Sep 15 20:48:47.470: INFO: Pod "pod-secrets-9d94213a-6243-449c-8183-5e225025789e" satisfied condition "Succeeded or Failed"
    Sep 15 20:48:47.474: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-secrets-9d94213a-6243-449c-8183-5e225025789e container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 15 20:48:47.492: INFO: Waiting for pod pod-secrets-9d94213a-6243-449c-8183-5e225025789e to disappear
    Sep 15 20:48:47.495: INFO: Pod pod-secrets-9d94213a-6243-449c-8183-5e225025789e no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:47.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3856" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":150,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:48:47.544: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep 15 20:48:47.580: INFO: Waiting up to 5m0s for pod "pod-9f3d615f-36e7-4ac9-9d38-f20e8c67b91f" in namespace "emptydir-6817" to be "Succeeded or Failed"
    Sep 15 20:48:47.583: INFO: Pod "pod-9f3d615f-36e7-4ac9-9d38-f20e8c67b91f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.909444ms
    Sep 15 20:48:49.587: INFO: Pod "pod-9f3d615f-36e7-4ac9-9d38-f20e8c67b91f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007186514s
    STEP: Saw pod success
    Sep 15 20:48:49.588: INFO: Pod "pod-9f3d615f-36e7-4ac9-9d38-f20e8c67b91f" satisfied condition "Succeeded or Failed"
    Sep 15 20:48:49.590: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-9f3d615f-36e7-4ac9-9d38-f20e8c67b91f container test-container: <nil>
    STEP: delete the pod
    Sep 15 20:48:49.604: INFO: Waiting for pod pod-9f3d615f-36e7-4ac9-9d38-f20e8c67b91f to disappear
    Sep 15 20:48:49.608: INFO: Pod pod-9f3d615f-36e7-4ac9-9d38-f20e8c67b91f no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:49.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-6817" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":176,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:49.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3653" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":7,"skipped":180,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:48:49.740: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-a4aba36c-bc33-4910-a363-aca3ca218443
    STEP: Creating a pod to test consume configMaps
    Sep 15 20:48:49.779: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a727f692-6227-4a8f-9617-c726ccf5b264" in namespace "projected-8431" to be "Succeeded or Failed"
    Sep 15 20:48:49.781: INFO: Pod "pod-projected-configmaps-a727f692-6227-4a8f-9617-c726ccf5b264": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383893ms
    Sep 15 20:48:51.786: INFO: Pod "pod-projected-configmaps-a727f692-6227-4a8f-9617-c726ccf5b264": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007250888s
    Sep 15 20:48:53.790: INFO: Pod "pod-projected-configmaps-a727f692-6227-4a8f-9617-c726ccf5b264": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011235471s
    STEP: Saw pod success
    Sep 15 20:48:53.790: INFO: Pod "pod-projected-configmaps-a727f692-6227-4a8f-9617-c726ccf5b264" satisfied condition "Succeeded or Failed"
    Sep 15 20:48:53.792: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod pod-projected-configmaps-a727f692-6227-4a8f-9617-c726ccf5b264 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 15 20:48:53.818: INFO: Waiting for pod pod-projected-configmaps-a727f692-6227-4a8f-9617-c726ccf5b264 to disappear
    Sep 15 20:48:53.821: INFO: Pod pod-projected-configmaps-a727f692-6227-4a8f-9617-c726ccf5b264 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:53.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8431" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":182,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:55.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-2908" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":6,"skipped":118,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:48:55.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-132" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":7,"skipped":130,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:49:03.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-4530" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:49:03.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-6537" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":150,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
    [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:49:07.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-2585" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":9,"skipped":153,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-2426-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
    Sep 15 20:48:01.905: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871677, loc:(*time.Location)(0x9e363e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871677, loc:(*time.Location)(0x9e363e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871677, loc:(*time.Location)(0x9e363e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871677, loc:(*time.Location)(0x9e363e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
    Sep 15 20:48:03.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871677, loc:(*time.Location)(0x9e363e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871677, loc:(*time.Location)(0x9e363e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871677, loc:(*time.Location)(0x9e363e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871677, loc:(*time.Location)(0x9e363e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
    Sep 15 20:48:05.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871677, loc:(*time.Location)(0x9e363e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871677, loc:(*time.Location)(0x9e363e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871677, loc:(*time.Location)(0x9e363e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871677, loc:(*time.Location)(0x9e363e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)}
    Sep 15 20:49:07.970: INFO: Waited 1m0.218977544s for the sample-apiserver to be ready to handle requests.
    Sep 15 20:49:07.970: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"5c292ea6-aa95-4776-abb8-47538fa060d6","resourceVersion":"3006","creationTimestamp":"2022-09-15T20:48:07Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-15T20:48:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-15T20:48:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]},"spec":{"service":{"namespace":"aggregator-2953","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM5ekNDQWQrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpJd09URTFNakEwTnpVM1doY05Nekl3T1RFeU1qQTBOelUzV2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURHRStEQmowcG5DaC9WYS9Sekp6d1YzZkpoN3FmaXJlQk1oQ0hRRWxPWlgwL08KcWgvU1Y3WFdmc1dqdGtPZjJMbXo2M0lSbWtpbE1taS9wQWUwbFEvTmZLRjJQLzVVVHMvZGVLMnBSQWdKeDZPZQpKeU9iNWdnZlBXY3R0SGhVSlFYUDZGc21uYWt3SkdtL28zeEZjZVEvejhqZCtBUGVzUGovTkgxWUp1ZG5DRnE3CjAvbE9iWEdRdjBvL3VBMWJqU2lyaTlQTmpHa1VseU1KVDdXa3RvM1FiSDNlMWJxVDJWQ3lIQUp6b2Y5K3c5U2MKQWFuVDNvNVVjS2NjeDgzTGFMTWRiZ080cTI1RXZ6aE9qRldKaVZJWm9uRzR4VlBLL3VBQ0R0Rzlielc1T25nYwpGOUZ2L254OWlrTDZST2Q2L3IvbVUzdVJuUzdSamUyemVYYVE0VzdKQWdNQkFBR2pRakJBTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSUnpJcWNFV1k1b3lHVU1kUjgKY1BvMS9hNVA3VEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBQXhFZGlDMGwyay9hQ0pkZ3FHTncxRFlBVnRCU
wp0aDhpYitHZldQaCsvSExwOVhPOGlXaENnelUyc045MzdrQW5PV2tTMEdhVDlTeW05dTBaMWJOKzEvcS9jSnhaCkQxUllIOW41SVlCQ3l4dFN2ZVkrQ0dBaHFFVGNhVFZPczVuTk1tUURQdXZwWTVjaFdGcWlRY0lObmNiYWVTaFMKLzBkZkZTbU5jZm9USU9MMWpoQ3k3KzBDb0trNEg2ZDZqMCszQ1piNDcxVS9BWXh4dWphTlhXTHhqSWdGNXgrcQptVlBWKzF6dmNMZDQ2TGlydVRiRm5rc3pPaDlRSXVxVVpTL3N0ZUdYbWZMV2JuMGk0VFZLUjY5ZVRjeDFra3I5CkR4T1RROElsSE44WGJ0ZTVmaFduaHk0S0NRREVlSTViSEYzQ0RxR29EMXR0Z1N0VnMxaStLZE91TlE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2022-09-15T20:48:07Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.133.224.180:7443/apis/wardle.example.com/v1alpha1: Get \"https://10.133.224.180:7443/apis/wardle.example.com/v1alpha1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"}]}}
    Sep 15 20:49:07.972: INFO: current pods: {"metadata":{"resourceVersion":"3334"},"items":[{"metadata":{"name":"sample-apiserver-deployment-64f6b9dc99-pc69f","generateName":"sample-apiserver-deployment-64f6b9dc99-","namespace":"aggregator-2953","uid":"edac13ee-a0eb-42ca-bfe7-88bb07fed157","resourceVersion":"2563","creationTimestamp":"2022-09-15T20:47:57Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"64f6b9dc99"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-64f6b9dc99","uid":"c8049e1c-cf28-4987-b53a-97d2f606f67a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-15T20:47:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c8049e1c-cf28-4987-b53a-97d2f606f67a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-15T20:48
:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.3\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"kube-api-access-jh45d","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"kube-api-access-jh45d","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.13-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"kube-api-access-jh45d","readOnly":true,"mountPath":"/var
/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"k8s-upgrade-and-conformance-soloe4-worker-3bhzw2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-15T20:47:57Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-15T20:48:06Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-15T20:48:06Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-15T20:47:57Z"}],"hostIP":"172.18.0.7","podIP":"192.168.2.3","podIPs":[{"ip":"192.168.2.3"}],"startTime":"2022-09-15T20:47:57Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2022-09-15T20:48:06Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.13-0","imageID":"sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","containerID":"containerd://09d29e219bd6243808235eb6475faaec6f9dfcec9dfeb749ccea50c3274f9aa9","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2022-09-15T20:48:03Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","imageID":"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276","containerID"
:"containerd://96c15499f0cccc54d6f0e6b32dfba1680927071306e2691748e68428d3546a64","started":true}],"qosClass":"BestEffort"}}]}
    Sep 15 20:49:07.991: INFO: logs of sample-apiserver-deployment-64f6b9dc99-pc69f/sample-apiserver (error: <nil>): W0915 20:48:04.075301       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    W0915 20:48:04.075506       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    I0915 20:48:04.119442       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
    I0915 20:48:04.119472       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
    I0915 20:48:04.121024       1 client.go:361] parsed scheme: "endpoint"
    I0915 20:48:04.121066       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    W0915 20:48:04.121348       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    I0915 20:48:04.363375       1 client.go:361] parsed scheme: "endpoint"
    I0915 20:48:04.363566       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    W0915 20:48:04.363963       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0915 20:48:05.121888       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    W0915 20:48:05.364453       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
    I0915 20:48:06.903400       1 client.go:361] parsed scheme: "endpoint"
    I0915 20:48:06.903444       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    I0915 20:48:06.904700       1 client.go:361] parsed scheme: "endpoint"
    I0915 20:48:06.904740       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    I0915 20:48:06.906116       1 client.go:361] parsed scheme: "endpoint"
    I0915 20:48:06.906150       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
... skipping 4 lines ...
    I0915 20:48:06.954421       1 dynamic_serving_content.go:129] Starting serving-cert::/apiserver.local.config/certificates/tls.crt::/apiserver.local.config/certificates/tls.key
    I0915 20:48:06.954486       1 secure_serving.go:178] Serving securely on [::]:443
    I0915 20:48:06.954567       1 tlsconfig.go:219] Starting DynamicServingCertificateController
    I0915 20:48:07.054242       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
    I0915 20:48:07.054404       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
    
    Sep 15 20:49:08.011: INFO: logs of sample-apiserver-deployment-64f6b9dc99-pc69f/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
    2022-09-15 20:48:06.464437 I | etcdmain: etcd Version: 3.4.13
    2022-09-15 20:48:06.464490 I | etcdmain: Git SHA: ae9734ed2
    2022-09-15 20:48:06.464494 I | etcdmain: Go Version: go1.12.17
    2022-09-15 20:48:06.464497 I | etcdmain: Go OS/Arch: linux/amd64
    2022-09-15 20:48:06.464502 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
    2022-09-15 20:48:06.464513 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
    2022-09-15 20:48:06.879777 N | etcdserver/membership: set the initial cluster version to 3.4
    2022-09-15 20:48:06.879849 I | etcdserver/api: enabled capabilities for version 3.4
    2022-09-15 20:48:06.879931 I | etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379]} to cluster cdf818194e3a8c32
    2022-09-15 20:48:06.880061 I | embed: ready to serve client requests
    2022-09-15 20:48:06.880859 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
    
    Sep 15 20:49:08.012: FAIL: gave up waiting for apiservice wardle to come up successfully
    Unexpected error:

        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 22 lines ...
    [sig-api-machinery] Aggregator
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 20:49:08.012: gave up waiting for apiservice wardle to come up successfully
      Unexpected error:

          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 6 lines ...
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-249/configmap-test-793b5367-6ca6-45f3-aa17-3eca72968a99
    STEP: Creating a pod to test consume configMaps
    Sep 15 20:49:08.168: INFO: Waiting up to 5m0s for pod "pod-configmaps-c33d9ddf-022b-4d93-9956-f01b70ff4130" in namespace "configmap-249" to be "Succeeded or Failed"
    Sep 15 20:49:08.177: INFO: Pod "pod-configmaps-c33d9ddf-022b-4d93-9956-f01b70ff4130": Phase="Pending", Reason="", readiness=false. Elapsed: 8.596825ms
    Sep 15 20:49:10.181: INFO: Pod "pod-configmaps-c33d9ddf-022b-4d93-9956-f01b70ff4130": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013183551s
    STEP: Saw pod success
    Sep 15 20:49:10.181: INFO: Pod "pod-configmaps-c33d9ddf-022b-4d93-9956-f01b70ff4130" satisfied condition "Succeeded or Failed"
    Sep 15 20:49:10.185: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-configmaps-c33d9ddf-022b-4d93-9956-f01b70ff4130 container env-test: <nil>
    STEP: delete the pod
    Sep 15 20:49:10.206: INFO: Waiting for pod pod-configmaps-c33d9ddf-022b-4d93-9956-f01b70ff4130 to disappear
    Sep 15 20:49:10.209: INFO: Pod pod-configmaps-c33d9ddf-022b-4d93-9956-f01b70ff4130 no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:49:10.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-249" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:49:10.245: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod UID as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 15 20:49:10.284: INFO: Waiting up to 5m0s for pod "downward-api-baafa64c-62b5-4147-be2b-974bbcca3ab6" in namespace "downward-api-972" to be "Succeeded or Failed"
    Sep 15 20:49:10.288: INFO: Pod "downward-api-baafa64c-62b5-4147-be2b-974bbcca3ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.002166ms
    Sep 15 20:49:12.292: INFO: Pod "downward-api-baafa64c-62b5-4147-be2b-974bbcca3ab6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00745051s
    STEP: Saw pod success
    Sep 15 20:49:12.292: INFO: Pod "downward-api-baafa64c-62b5-4147-be2b-974bbcca3ab6" satisfied condition "Succeeded or Failed"
    Sep 15 20:49:12.296: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod downward-api-baafa64c-62b5-4147-be2b-974bbcca3ab6 container dapi-container: <nil>
    STEP: delete the pod
    Sep 15 20:49:12.314: INFO: Waiting for pod downward-api-baafa64c-62b5-4147-be2b-974bbcca3ab6 to disappear
    Sep 15 20:49:12.317: INFO: Pod downward-api-baafa64c-62b5-4147-be2b-974bbcca3ab6 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:49:12.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-972" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":52,"failed":0}
    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:48:53.841: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 15 20:48:53.891: INFO: created pod
    Sep 15 20:48:53.891: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-2385" to be "Succeeded or Failed"
    Sep 15 20:48:53.895: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.159644ms
    Sep 15 20:48:55.899: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007885157s
    STEP: Saw pod success
    Sep 15 20:48:55.899: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
    Sep 15 20:49:25.900: INFO: polling logs
    Sep 15 20:49:25.907: INFO: Pod logs: 
    2022/09/15 20:48:54 OK: Got token
    2022/09/15 20:48:54 validating with in-cluster discovery
    2022/09/15 20:48:54 OK: got issuer https://kubernetes.default.svc.cluster.local
    2022/09/15 20:48:54 Full, not-validated claims: 
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:49:25.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-2385" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":9,"skipped":188,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-node] RuntimeClass
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:49:26.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "runtimeclass-2333" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":-1,"completed":10,"skipped":189,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-lv2m
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 15 20:49:12.395: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lv2m" in namespace "subpath-1912" to be "Succeeded or Failed"
    Sep 15 20:49:12.403: INFO: Pod "pod-subpath-test-configmap-lv2m": Phase="Pending", Reason="", readiness=false. Elapsed: 7.802609ms
    Sep 15 20:49:14.408: INFO: Pod "pod-subpath-test-configmap-lv2m": Phase="Running", Reason="", readiness=true. Elapsed: 2.012182626s
    Sep 15 20:49:16.412: INFO: Pod "pod-subpath-test-configmap-lv2m": Phase="Running", Reason="", readiness=true. Elapsed: 4.016193594s
    Sep 15 20:49:18.415: INFO: Pod "pod-subpath-test-configmap-lv2m": Phase="Running", Reason="", readiness=true. Elapsed: 6.019318827s
    Sep 15 20:49:20.419: INFO: Pod "pod-subpath-test-configmap-lv2m": Phase="Running", Reason="", readiness=true. Elapsed: 8.023420687s
    Sep 15 20:49:22.422: INFO: Pod "pod-subpath-test-configmap-lv2m": Phase="Running", Reason="", readiness=true. Elapsed: 10.026528636s
    Sep 15 20:49:24.426: INFO: Pod "pod-subpath-test-configmap-lv2m": Phase="Running", Reason="", readiness=true. Elapsed: 12.031028631s
    Sep 15 20:49:26.431: INFO: Pod "pod-subpath-test-configmap-lv2m": Phase="Running", Reason="", readiness=true. Elapsed: 14.035524344s
    Sep 15 20:49:28.435: INFO: Pod "pod-subpath-test-configmap-lv2m": Phase="Running", Reason="", readiness=true. Elapsed: 16.039729325s
    Sep 15 20:49:30.439: INFO: Pod "pod-subpath-test-configmap-lv2m": Phase="Running", Reason="", readiness=true. Elapsed: 18.043788846s
    Sep 15 20:49:32.444: INFO: Pod "pod-subpath-test-configmap-lv2m": Phase="Running", Reason="", readiness=true. Elapsed: 20.048504216s
    Sep 15 20:49:34.448: INFO: Pod "pod-subpath-test-configmap-lv2m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.052248981s
    STEP: Saw pod success
    Sep 15 20:49:34.448: INFO: Pod "pod-subpath-test-configmap-lv2m" satisfied condition "Succeeded or Failed"
    Sep 15 20:49:34.451: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-subpath-test-configmap-lv2m container test-container-subpath-configmap-lv2m: <nil>
    STEP: delete the pod
    Sep 15 20:49:34.464: INFO: Waiting for pod pod-subpath-test-configmap-lv2m to disappear
    Sep 15 20:49:34.466: INFO: Pod pod-subpath-test-configmap-lv2m no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-lv2m
    Sep 15 20:49:34.466: INFO: Deleting pod "pod-subpath-test-configmap-lv2m" in namespace "subpath-1912"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:49:34.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-1912" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":60,"failed":0}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:49:45.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-7644" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":8,"skipped":73,"failed":0}
    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    STEP: Registering the custom resource webhook via the AdmissionRegistration API
    Sep 15 20:49:22.276: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:49:32.385: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:49:42.488: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:49:52.586: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:50:02.597: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:50:02.597: FAIL: waiting for webhook configuration to be ready
    Unexpected error:

        <*errors.errorString | 0xc0002b8290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should be able to deny custom resource creation, update and deletion [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 20:50:02.597: waiting for webhook configuration to be ready
      Unexpected error:

          <*errors.errorString | 0xc0002b8290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-wk5k
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 15 20:49:45.726: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wk5k" in namespace "subpath-3914" to be "Succeeded or Failed"
    Sep 15 20:49:45.730: INFO: Pod "pod-subpath-test-configmap-wk5k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208448ms
    Sep 15 20:49:47.735: INFO: Pod "pod-subpath-test-configmap-wk5k": Phase="Running", Reason="", readiness=true. Elapsed: 2.00899398s
    Sep 15 20:49:49.739: INFO: Pod "pod-subpath-test-configmap-wk5k": Phase="Running", Reason="", readiness=true. Elapsed: 4.013515421s
    Sep 15 20:49:51.744: INFO: Pod "pod-subpath-test-configmap-wk5k": Phase="Running", Reason="", readiness=true. Elapsed: 6.017684715s
    Sep 15 20:49:53.749: INFO: Pod "pod-subpath-test-configmap-wk5k": Phase="Running", Reason="", readiness=true. Elapsed: 8.022907629s
    Sep 15 20:49:55.753: INFO: Pod "pod-subpath-test-configmap-wk5k": Phase="Running", Reason="", readiness=true. Elapsed: 10.027230329s
    Sep 15 20:49:57.757: INFO: Pod "pod-subpath-test-configmap-wk5k": Phase="Running", Reason="", readiness=true. Elapsed: 12.03143109s
    Sep 15 20:49:59.761: INFO: Pod "pod-subpath-test-configmap-wk5k": Phase="Running", Reason="", readiness=true. Elapsed: 14.034596178s
    Sep 15 20:50:01.765: INFO: Pod "pod-subpath-test-configmap-wk5k": Phase="Running", Reason="", readiness=true. Elapsed: 16.038845806s
    Sep 15 20:50:03.769: INFO: Pod "pod-subpath-test-configmap-wk5k": Phase="Running", Reason="", readiness=true. Elapsed: 18.043424286s
    Sep 15 20:50:05.774: INFO: Pod "pod-subpath-test-configmap-wk5k": Phase="Running", Reason="", readiness=true. Elapsed: 20.048459171s
    Sep 15 20:50:07.779: INFO: Pod "pod-subpath-test-configmap-wk5k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.053218389s
    STEP: Saw pod success
    Sep 15 20:50:07.779: INFO: Pod "pod-subpath-test-configmap-wk5k" satisfied condition "Succeeded or Failed"
    Sep 15 20:50:07.782: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod pod-subpath-test-configmap-wk5k container test-container-subpath-configmap-wk5k: <nil>
    STEP: delete the pod
    Sep 15 20:50:07.798: INFO: Waiting for pod pod-subpath-test-configmap-wk5k to disappear
    Sep 15 20:50:07.801: INFO: Pod pod-subpath-test-configmap-wk5k no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-wk5k
    Sep 15 20:50:07.801: INFO: Deleting pod "pod-subpath-test-configmap-wk5k" in namespace "subpath-3914"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:50:07.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-3914" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":74,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":9,"skipped":155,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:50:03.190: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-1210-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":10,"skipped":155,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:50:11.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-2542" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":10,"skipped":109,"failed":0}
    
    S
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":0,"skipped":1,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:49:08.284: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename aggregator
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Registering the sample API server.
    Sep 15 20:49:08.950: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
    Sep 15 20:49:11.002: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871749, loc:(*time.Location)(0x9e363e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871748, loc:(*time.Location)(0x9e363e0)}}, Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871750, loc:(*time.Location)(0x9e363e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798871750, loc:(*time.Location)(0x9e363e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
    Sep 15 20:50:13.229: INFO: Waited 1m0.214220344s for the sample-apiserver to be ready to handle requests.
    Sep 15 20:50:13.229: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"9d9422b1-1d4b-4a4d-b223-96534b7cc1c3","resourceVersion":"3995","creationTimestamp":"2022-09-15T20:49:13Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-15T20:49:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-15T20:49:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]},"spec":{"service":{"namespace":"aggregator-3609","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM5ekNDQWQrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpJd09URTFNakEwT1RBNFdoY05Nekl3T1RFeU1qQTBPVEE0V2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURXd1hHbmNzZGJOZ2tVcTZjcnkvbFhqeGFEMFdTRzBlZ1lZMzhXZlRZSk11Z0sKdFNxV0lmZFBmZHdObk40MGw2c1phWEttdG1mc3BYK0dKblU2bjhJKzJXZ2ZFOHRmak8rdGl1bDJjRVFkako0SwpJZ3RNbGVqV3hQU1VFRHhvRHdjczcrMUtWakJFTzZ2cmlWMGtubTBDWHh0SlJtMHJIdGc1Ymk4QldHUjAxbC9wCkNMZGlXS1ZiLzlmd3RzaW1rdUtEVVgrelM0Q3VYS0xkaE5mT1hodW9EZ1hXM0FuR1EvUElNa0M3bjc3eWw0K3EKN0NacnhPU2p3N3Q4bUZyNHFSaUgxc2tmejNFZHFjSVRrNmN0UzJSZlplL2RVL1JMQ3h1L1djeERpL3FsWTlTVwpHN01nWFNaTXVqcXVEek1mR0VmcDNSMmVMUmM4WGY2d0xSWDRmSUUzQWdNQkFBR2pRakJBTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUUWM2cHlpeVUwZjJUdnFrbmYKQ2lrZHc0RlVtVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBdkg4WXJFbFovVDViMUFvc0QxQk1hVW1ySUV3V
ApUM1NEOXEwcTRFUzdNdXM4V0laSEhET0JsbnhkVHVlWWZjOG5LTExtaFdmS29HSVRmdDNRUmFLZ2NhREUva3FYCjdvOHczS3FzUUtVVFJOUXJ0QzZlT3VFbTJwTWY4SHp2NncwaFh1RGZVL2NhZzJHVlVpVDIvUnZ5d2t1cmZVNUkKMlZCSCtHbVUrVEIrMUdaZitRUlFIWFpqSERMVHB2ZVNud1Z1V1AxTnFqeWJ1c3FJcFZGSzlSOGFQL0E3UjZBMAo5b1F6dnRveE5UVVBlQk0rVjBXVldjY1Q1TlR2N0tBSWJNRmg0NE9WZVZ3UFlDeHJSeXBkREJPTkxUTC9qbnFSCnRuQllBS245WUN0ZDVVZjlYTVRsc1NIb1k3UUpYTjJQUWtjUm5BY29QL2ZpaFp6bjB6YnpKZUw2aGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2022-09-15T20:49:13Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.128.231.125:7443/apis/wardle.example.com/v1alpha1: Get \"https://10.128.231.125:7443/apis/wardle.example.com/v1alpha1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"}]}}
    Sep 15 20:50:13.229: INFO: current pods: {"metadata":{"resourceVersion":"4000"},"items":[{"metadata":{"name":"sample-apiserver-deployment-64f6b9dc99-vxtpk","generateName":"sample-apiserver-deployment-64f6b9dc99-","namespace":"aggregator-3609","uid":"964093ce-4012-40bf-8662-02562faca01a","resourceVersion":"3517","creationTimestamp":"2022-09-15T20:49:08Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"64f6b9dc99"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-64f6b9dc99","uid":"3e2ea88e-1224-40fc-a648-784df0b237d0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-15T20:49:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3e2ea88e-1224-40fc-a648-784df0b237d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-15T20:49
:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.7\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"kube-api-access-5jl4s","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"kube-api-access-5jl4s","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.13-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"kube-api-access-5jl4s","readOnly":true,"mountPath":"/var
/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"k8s-upgrade-and-conformance-soloe4-worker-3bhzw2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-15T20:49:08Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-15T20:49:11Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-15T20:49:11Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-15T20:49:08Z"}],"hostIP":"172.18.0.7","podIP":"192.168.2.7","podIPs":[{"ip":"192.168.2.7"}],"startTime":"2022-09-15T20:49:08Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2022-09-15T20:49:09Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.13-0","imageID":"sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","containerID":"containerd://64374cfd02063e41cb11f3374e8451b83e3963055bd014d47246cfe6ff216e4f","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2022-09-15T20:49:11Z"}},"lastState":{"terminated":{"exitCode":255,"reason":"Error","startedAt":"2022-09-15T20:49:09Z","finishedAt":"2022-09-15T20:49:09Z","containerID":"containerd://80c75a9dd3022a0f50a2be735bdee7dc94958a060ecd574704640a779f6391b0"}},"ready":true,"restart
Count":1,"image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","imageID":"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276","containerID":"containerd://b67af483c83efb2ce5d4031012877eba2f616c88eda71db91004e506900a6aa5","started":true}],"qosClass":"BestEffort"}}]}
    Sep 15 20:50:13.240: INFO: logs of sample-apiserver-deployment-64f6b9dc99-vxtpk/sample-apiserver (error: <nil>): W0915 20:49:11.589748       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    W0915 20:49:11.589883       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    I0915 20:49:11.606693       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
    I0915 20:49:11.606723       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
    I0915 20:49:11.608358       1 client.go:361] parsed scheme: "endpoint"
    I0915 20:49:11.608406       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    I0915 20:49:11.609374       1 client.go:361] parsed scheme: "endpoint"
... skipping 11 lines ...
    I0915 20:49:11.658608       1 tlsconfig.go:219] Starting DynamicServingCertificateController
    I0915 20:49:11.758195       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
    I0915 20:49:11.758244       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
    I0915 20:49:12.145249       1 client.go:361] parsed scheme: "endpoint"
    I0915 20:49:12.145367       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    
    Sep 15 20:50:13.251: INFO: logs of sample-apiserver-deployment-64f6b9dc99-vxtpk/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
    2022-09-15 20:49:09.835966 I | etcdmain: etcd Version: 3.4.13
    2022-09-15 20:49:09.836024 I | etcdmain: Git SHA: ae9734ed2
    2022-09-15 20:49:09.836044 I | etcdmain: Go Version: go1.12.17
    2022-09-15 20:49:09.836047 I | etcdmain: Go OS/Arch: linux/amd64
    2022-09-15 20:49:09.836051 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
    2022-09-15 20:49:09.836058 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
    2022-09-15 20:49:10.152220 N | etcdserver/membership: set the initial cluster version to 3.4
    2022-09-15 20:49:10.152562 I | etcdserver/api: enabled capabilities for version 3.4
    2022-09-15 20:49:10.152598 I | etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379]} to cluster cdf818194e3a8c32
    2022-09-15 20:49:10.152622 I | embed: ready to serve client requests
    2022-09-15 20:49:10.153692 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
    
    Sep 15 20:50:13.251: FAIL: gave up waiting for apiservice wardle to come up successfully
    Unexpected error:

        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 22 lines ...
    [sig-api-machinery] Aggregator
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 20:50:13.251: gave up waiting for apiservice wardle to come up successfully
      Unexpected error:

          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 8 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
    [It] should serve multiport endpoints from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service multi-endpoint-test in namespace services-9847
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9847 to expose endpoints map[]
    Sep 15 20:50:11.435: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found
    Sep 15 20:50:12.449: INFO: successfully validated that service multi-endpoint-test in namespace services-9847 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-9847
    Sep 15 20:50:12.466: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep 15 20:50:14.471: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9847 to expose endpoints map[pod1:[100]]
    Sep 15 20:50:14.492: INFO: successfully validated that service multi-endpoint-test in namespace services-9847 exposes endpoints map[pod1:[100]]
... skipping 14 lines ...
    STEP: Destroying namespace "services-9847" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":11,"skipped":110,"failed":0}

    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:50:16.903: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename replicaset
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:50:27.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-4273" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":12,"skipped":110,"failed":0}

    SSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 42 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:50:49.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-5762" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":115,"failed":0}

    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:50:49.907: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename crd-publish-openapi
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:00.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6806" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":14,"skipped":115,"failed":0}

    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 20:51:00.728: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c47db879-9703-4335-a37c-f86b8c5cf5f5" in namespace "projected-2450" to be "Succeeded or Failed"
    Sep 15 20:51:00.733: INFO: Pod "downwardapi-volume-c47db879-9703-4335-a37c-f86b8c5cf5f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.644331ms
    Sep 15 20:51:02.739: INFO: Pod "downwardapi-volume-c47db879-9703-4335-a37c-f86b8c5cf5f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010996903s
    STEP: Saw pod success
    Sep 15 20:51:02.739: INFO: Pod "downwardapi-volume-c47db879-9703-4335-a37c-f86b8c5cf5f5" satisfied condition "Succeeded or Failed"
    Sep 15 20:51:02.744: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod downwardapi-volume-c47db879-9703-4335-a37c-f86b8c5cf5f5 container client-container: <nil>
    STEP: delete the pod
    Sep 15 20:51:02.773: INFO: Waiting for pod downwardapi-volume-c47db879-9703-4335-a37c-f86b8c5cf5f5 to disappear
    Sep 15 20:51:02.776: INFO: Pod downwardapi-volume-c47db879-9703-4335-a37c-f86b8c5cf5f5 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:02.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2450" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":121,"failed":0}

    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:07.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-2386" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":136,"failed":0}

    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:13.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-9422" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":17,"skipped":160,"failed":0}

    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:51:13.890: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-ecc7eff8-dec5-481c-9838-c35355ef35a4
    STEP: Creating a pod to test consume configMaps
    Sep 15 20:51:13.972: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9f1be935-0f94-4537-9464-8aa2826117a8" in namespace "projected-4441" to be "Succeeded or Failed"
    Sep 15 20:51:13.980: INFO: Pod "pod-projected-configmaps-9f1be935-0f94-4537-9464-8aa2826117a8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.106977ms
    Sep 15 20:51:15.988: INFO: Pod "pod-projected-configmaps-9f1be935-0f94-4537-9464-8aa2826117a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015334346s
    STEP: Saw pod success
    Sep 15 20:51:15.988: INFO: Pod "pod-projected-configmaps-9f1be935-0f94-4537-9464-8aa2826117a8" satisfied condition "Succeeded or Failed"
    Sep 15 20:51:15.993: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f pod pod-projected-configmaps-9f1be935-0f94-4537-9464-8aa2826117a8 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 15 20:51:16.023: INFO: Waiting for pod pod-projected-configmaps-9f1be935-0f94-4537-9464-8aa2826117a8 to disappear
    Sep 15 20:51:16.029: INFO: Pod pod-projected-configmaps-9f1be935-0f94-4537-9464-8aa2826117a8 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:16.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4441" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":202,"failed":0}

    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":0,"skipped":1,"failed":2,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:50:13.641: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename aggregator
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 2 lines ...
    [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Registering the sample API server.
    Sep 15 20:50:14.629: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
    Sep 15 20:51:17.313: INFO: Waited 1m0.367660231s for the sample-apiserver to be ready to handle requests.
    Sep 15 20:51:17.313: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"7873a2d8-b3fc-4ed6-9616-f5953455a6f3","resourceVersion":"4517","creationTimestamp":"2022-09-15T20:50:16Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-15T20:50:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2022-09-15T20:50:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]},"spec":{"service":{"namespace":"aggregator-2884","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM5ekNDQWQrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpJd09URTFNakExTURFMFdoY05Nekl3T1RFeU1qQTFNREUwV2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUUMrbEd6d0ZhZmU0QTh5S0M5dUt0VEoxMjFJNVVOOW4rdWpUU2NzVEozWTI2TlUKS0ZZQWR2RUQ4U3dHNWNrY0p1RitrWG9NcG50d3I1YjVQVTFJQ2ZwQ0VqenMraU5ReXc3dG1KeHhNbTRMOXpzUwpxSUQwdlQ3NFBLam9jcERFZ1h4MlMvSXQ1YXlaTTBEYmhrNGRhMmMyNFVBRU1FYVhxZThoS05NTXpZSkZJRGJqCnRBYmI5MTJlOUlFemFnaC9nMGZOWWVaTzRFMUlBaHpoZUluSG43QlErMUFCdS9MU3I5bXI0dTBrNmVFazVjWHIKOG1UWDRwU1M3eFlXa3J0SGkrUXVoQnc5bTU3aFgwa0s5alBTVUpMaElkTE8yOU5sdkRJYjdPZ0xMOVBWMWdtQQpRR2o1cTBvTzhySkZBNHZDVnBndjVWaWZoRkVXSFlLWHA5R2ZFQ0IvQWdNQkFBR2pRakJBTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRbE1mYVI5bGhoUEtzNmhrMysKWGd1c2lkbllIVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBbTZQQ1RGNjRFTDFETkRuck1KUUNWZU10ZkhVY
QowSjZ5TWcyNERGcDVCejF0TE5aN0xmYS9PeHZIRmJhNG9CaGhqNy81Zm9tNHY1alN5bmVtUGtSTnF2YjRpdHc0Ci9qSXBiV2VhNFZaMysyMVpvK2lUbGFHTTdhb0dRVUF5MFFOanF3OTNIbCt3S3dhbkpFWEZ3bFE2RXVYa2xaZk8KWXdtc3BBTVl0VVY2U0cxcmFWTkFoaFowU2trVkFBUEZITWRjU0xRSVJ1am9oYmx3Ri8yK1doczFXckpkYU1RRApROFdKUWVraDhOWENibFh3ME1NQmo5SlhZWURJd1JSK3RLZENsazdYbDk1UG1GNmpkZ1M0SENNbkRCUWd4MTdICnlOa0dyc1RDK0Q4OG13cFFSeVhTSlZUTzNJbkZHYnB5VWJ5OHFsc3pGOCs2dHJrekFGTUhBZURnK1E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2022-09-15T20:50:18Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.136.220.69:7443/apis/wardle.example.com/v1alpha1: Get \"https://10.136.220.69:7443/apis/wardle.example.com/v1alpha1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"}]}}
    Sep 15 20:51:17.314: INFO: current pods: {"metadata":{"resourceVersion":"4566"},"items":[{"metadata":{"name":"sample-apiserver-deployment-64f6b9dc99-vpn8r","generateName":"sample-apiserver-deployment-64f6b9dc99-","namespace":"aggregator-2884","uid":"e4741784-d730-4134-a442-79d9dac13602","resourceVersion":"4173","creationTimestamp":"2022-09-15T20:50:14Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"64f6b9dc99"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-64f6b9dc99","uid":"f5183e44-8c44-4f11-9ca8-e8072ad56d39","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-09-15T20:50:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f5183e44-8c44-4f11-9ca8-e8072ad56d39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-09-15T20:50
:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"kube-api-access-c26vc","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"kube-api-access-c26vc","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.4.13-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"kube-api-access-c26vc","readOnly":true,"mountPath":"/var
/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"k8s-upgrade-and-conformance-soloe4-worker-3bhzw2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-15T20:50:14Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-15T20:50:18Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-15T20:50:18Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-09-15T20:50:14Z"}],"hostIP":"172.18.0.7","podIP":"192.168.2.9","podIPs":[{"ip":"192.168.2.9"}],"startTime":"2022-09-15T20:50:14Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2022-09-15T20:50:15Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.4.13-0","imageID":"sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934","containerID":"containerd://818e0bb310066855a87dbdbe596810508cddce6988bf6697969064a9c6a8c819","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2022-09-15T20:50:17Z"}},"lastState":{"terminated":{"exitCode":255,"reason":"Error","startedAt":"2022-09-15T20:50:15Z","finishedAt":"2022-09-15T20:50:16Z","containerID":"containerd://dfe49d81b989c1a324840af97b1034d403cd5d7e03997d8bb581f997e7050556"}},"ready":true,"restart
Count":1,"image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4","imageID":"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276","containerID":"containerd://0cc0795cb6752ce99188d697ac63171700b9088a68126b29624b59c0918880e2","started":true}],"qosClass":"BestEffort"}}]}

    Sep 15 20:51:17.324: INFO: logs of sample-apiserver-deployment-64f6b9dc99-vpn8r/sample-apiserver (error: <nil>): W0915 20:50:18.405365       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    W0915 20:50:18.405666       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
    I0915 20:50:18.434115       1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder.
    I0915 20:50:18.434200       1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook.
    I0915 20:50:18.437045       1 client.go:361] parsed scheme: "endpoint"
    I0915 20:50:18.437103       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    I0915 20:50:18.441465       1 client.go:361] parsed scheme: "endpoint"
... skipping 11 lines ...
    I0915 20:50:18.536892       1 tlsconfig.go:219] Starting DynamicServingCertificateController
    I0915 20:50:18.554491       1 client.go:361] parsed scheme: "endpoint"
    I0915 20:50:18.554913       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
    I0915 20:50:18.636073       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
    I0915 20:50:18.638073       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
    
    Sep 15 20:51:17.337: INFO: logs of sample-apiserver-deployment-64f6b9dc99-vpn8r/etcd (error: <nil>): [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
    2022-09-15 20:50:15.915658 I | etcdmain: etcd Version: 3.4.13
    2022-09-15 20:50:15.915773 I | etcdmain: Git SHA: ae9734ed2
    2022-09-15 20:50:15.915779 I | etcdmain: Go Version: go1.12.17
    2022-09-15 20:50:15.915784 I | etcdmain: Go OS/Arch: linux/amd64
    2022-09-15 20:50:15.915819 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
    2022-09-15 20:50:15.915856 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
... skipping 26 lines ...
    2022-09-15 20:50:16.845194 I | etcdserver: setting up the initial cluster version to 3.4
    2022-09-15 20:50:16.846051 I | embed: ready to serve client requests
    2022-09-15 20:50:16.846437 N | etcdserver/membership: set the initial cluster version to 3.4
    2022-09-15 20:50:16.851703 I | etcdserver/api: enabled capabilities for version 3.4
    2022-09-15 20:50:16.857106 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
    
    Sep 15 20:51:17.338: FAIL: gave up waiting for apiservice wardle to come up successfully
    Unexpected error:

        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 22 lines ...
    [sig-api-machinery] Aggregator
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 20:51:17.338: gave up waiting for apiservice wardle to come up successfully
      Unexpected error:

          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:406
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":0,"skipped":1,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSS
    ------------------------------
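    [Editor's note] The "timed out waiting for the condition" error in the Aggregator failure above is the generic message produced by the polling helpers in k8s.io/apimachinery (e.g. wait.PollImmediate): the test re-checks the wardle APIService's Available condition on an interval until a deadline passes, then gives up with that string. A minimal Python sketch of those semantics (an illustration only, not the actual Go implementation; the names `poll_until`, `condition`, `interval`, and `timeout` are hypothetical):

    ```python
    import time

    def poll_until(condition, interval, timeout):
        """Re-evaluate `condition` every `interval` seconds until it returns
        True, or give up after `timeout` seconds with the familiar error
        string seen in the e2e failure output above."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if condition():  # e.g. "is the APIService condition Available=True?"
                return True
            time.sleep(interval)
        # Mirrors wait.ErrWaitTimeout's message in k8s.io/apimachinery.
        raise TimeoutError("timed out waiting for the condition")
    ```

    In the real test the condition never became true (the sample-apiserver pod restarted once and the APIService stayed unavailable), so the deadline elapsed and the error above was reported.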
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:18.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-6106" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":222,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:51:18.387: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-3ba26037-ffa0-4b7f-89d6-9d371feeee11
    STEP: Creating a pod to test consume configMaps
    Sep 15 20:51:18.490: INFO: Waiting up to 5m0s for pod "pod-configmaps-f2b44fc6-ea27-496e-88b0-71ae8c93ee49" in namespace "configmap-3023" to be "Succeeded or Failed"
    Sep 15 20:51:18.495: INFO: Pod "pod-configmaps-f2b44fc6-ea27-496e-88b0-71ae8c93ee49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.78658ms
    Sep 15 20:51:20.504: INFO: Pod "pod-configmaps-f2b44fc6-ea27-496e-88b0-71ae8c93ee49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013947332s
    STEP: Saw pod success
    Sep 15 20:51:20.504: INFO: Pod "pod-configmaps-f2b44fc6-ea27-496e-88b0-71ae8c93ee49" satisfied condition "Succeeded or Failed"
    Sep 15 20:51:20.512: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-configmaps-f2b44fc6-ea27-496e-88b0-71ae8c93ee49 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 15 20:51:20.573: INFO: Waiting for pod pod-configmaps-f2b44fc6-ea27-496e-88b0-71ae8c93ee49 to disappear
    Sep 15 20:51:20.593: INFO: Pod pod-configmaps-f2b44fc6-ea27-496e-88b0-71ae8c93ee49 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:20.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3023" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":260,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:51:20.666: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep 15 20:51:20.747: INFO: Waiting up to 5m0s for pod "security-context-faf8b226-093c-487e-b304-bf258aeb5dd2" in namespace "security-context-8018" to be "Succeeded or Failed"
    Sep 15 20:51:20.753: INFO: Pod "security-context-faf8b226-093c-487e-b304-bf258aeb5dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.605088ms
    Sep 15 20:51:22.759: INFO: Pod "security-context-faf8b226-093c-487e-b304-bf258aeb5dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012265572s
    Sep 15 20:51:24.768: INFO: Pod "security-context-faf8b226-093c-487e-b304-bf258aeb5dd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021168107s
    STEP: Saw pod success
    Sep 15 20:51:24.769: INFO: Pod "security-context-faf8b226-093c-487e-b304-bf258aeb5dd2" satisfied condition "Succeeded or Failed"
    Sep 15 20:51:24.776: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod security-context-faf8b226-093c-487e-b304-bf258aeb5dd2 container test-container: <nil>
    STEP: delete the pod
    Sep 15 20:51:24.806: INFO: Waiting for pod security-context-faf8b226-093c-487e-b304-bf258aeb5dd2 to disappear
    Sep 15 20:51:24.813: INFO: Pod security-context-faf8b226-093c-487e-b304-bf258aeb5dd2 no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:24.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-8018" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":21,"skipped":262,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:29.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-9971" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":293,"failed":0}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:31.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-8298" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:42.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-5858" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":23,"skipped":306,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:51:42.874: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a volume subpath [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in volume subpath
    Sep 15 20:51:42.937: INFO: Waiting up to 5m0s for pod "var-expansion-d769c0ad-a0b7-4b46-98a1-2275f00c4812" in namespace "var-expansion-491" to be "Succeeded or Failed"
    Sep 15 20:51:42.942: INFO: Pod "var-expansion-d769c0ad-a0b7-4b46-98a1-2275f00c4812": Phase="Pending", Reason="", readiness=false. Elapsed: 4.554802ms
    Sep 15 20:51:44.949: INFO: Pod "var-expansion-d769c0ad-a0b7-4b46-98a1-2275f00c4812": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010696312s
    STEP: Saw pod success
    Sep 15 20:51:44.949: INFO: Pod "var-expansion-d769c0ad-a0b7-4b46-98a1-2275f00c4812" satisfied condition "Succeeded or Failed"
    Sep 15 20:51:44.953: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod var-expansion-d769c0ad-a0b7-4b46-98a1-2275f00c4812 container dapi-container: <nil>
    STEP: delete the pod
    Sep 15 20:51:44.975: INFO: Waiting for pod var-expansion-d769c0ad-a0b7-4b46-98a1-2275f00c4812 to disappear
    Sep 15 20:51:44.980: INFO: Pod var-expansion-d769c0ad-a0b7-4b46-98a1-2275f00c4812 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:44.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-491" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":24,"skipped":358,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Ingress API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:45.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingress-4964" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":25,"skipped":379,"failed":0}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:51:51.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-8849" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":39,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
    [It] should serve a basic endpoint from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service endpoint-test2 in namespace services-5767
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5767 to expose endpoints map[]
    Sep 15 20:51:51.962: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found
    Sep 15 20:51:52.978: INFO: successfully validated that service endpoint-test2 in namespace services-5767 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-5767
    Sep 15 20:51:53.000: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep 15 20:51:55.005: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5767 to expose endpoints map[pod1:[80]]
    Sep 15 20:51:55.026: INFO: successfully validated that service endpoint-test2 in namespace services-5767 exposes endpoints map[pod1:[80]]
... skipping 14 lines ...
    STEP: Destroying namespace "services-5767" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":3,"skipped":55,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:49:26.026: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod with failed condition
    STEP: updating the pod
    Sep 15 20:51:26.612: INFO: Successfully updated pod "var-expansion-80c8362e-1acf-47ec-8bcf-644c3a7e47af"
    STEP: waiting for pod running
    STEP: deleting the pod gracefully
    Sep 15 20:51:28.625: INFO: Deleting pod "var-expansion-80c8362e-1acf-47ec-8bcf-644c3a7e47af" in namespace "var-expansion-6716"
    Sep 15 20:51:28.634: INFO: Wait up to 5m0s for pod "var-expansion-80c8362e-1acf-47ec-8bcf-644c3a7e47af" to be fully deleted
... skipping 6 lines ...
    • [SLOW TEST:154.676 seconds]
    [sig-node] Variable Expansion
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":11,"skipped":190,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 20:52:00.803: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28cfd7a9-dea7-4a90-b9d4-5d1887777fd5" in namespace "projected-8309" to be "Succeeded or Failed"
    Sep 15 20:52:00.808: INFO: Pod "downwardapi-volume-28cfd7a9-dea7-4a90-b9d4-5d1887777fd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.975997ms
    Sep 15 20:52:02.818: INFO: Pod "downwardapi-volume-28cfd7a9-dea7-4a90-b9d4-5d1887777fd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015361195s
    STEP: Saw pod success
    Sep 15 20:52:02.819: INFO: Pod "downwardapi-volume-28cfd7a9-dea7-4a90-b9d4-5d1887777fd5" satisfied condition "Succeeded or Failed"
    Sep 15 20:52:02.823: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod downwardapi-volume-28cfd7a9-dea7-4a90-b9d4-5d1887777fd5 container client-container: <nil>
    STEP: delete the pod
    Sep 15 20:52:02.862: INFO: Waiting for pod downwardapi-volume-28cfd7a9-dea7-4a90-b9d4-5d1887777fd5 to disappear
    Sep 15 20:52:02.867: INFO: Pod downwardapi-volume-28cfd7a9-dea7-4a90-b9d4-5d1887777fd5 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:02.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8309" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":195,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:04.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-2607" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":4,"skipped":153,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with secret pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-secret-7j2n
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 15 20:51:45.334: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7j2n" in namespace "subpath-4158" to be "Succeeded or Failed"
    Sep 15 20:51:45.339: INFO: Pod "pod-subpath-test-secret-7j2n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.386138ms
    Sep 15 20:51:47.345: INFO: Pod "pod-subpath-test-secret-7j2n": Phase="Running", Reason="", readiness=true. Elapsed: 2.010433177s
    Sep 15 20:51:49.351: INFO: Pod "pod-subpath-test-secret-7j2n": Phase="Running", Reason="", readiness=true. Elapsed: 4.016775534s
    Sep 15 20:51:51.359: INFO: Pod "pod-subpath-test-secret-7j2n": Phase="Running", Reason="", readiness=true. Elapsed: 6.024875774s
    Sep 15 20:51:53.367: INFO: Pod "pod-subpath-test-secret-7j2n": Phase="Running", Reason="", readiness=true. Elapsed: 8.032586286s
    Sep 15 20:51:55.373: INFO: Pod "pod-subpath-test-secret-7j2n": Phase="Running", Reason="", readiness=true. Elapsed: 10.038892612s
    Sep 15 20:51:57.388: INFO: Pod "pod-subpath-test-secret-7j2n": Phase="Running", Reason="", readiness=true. Elapsed: 12.053460085s
    Sep 15 20:51:59.395: INFO: Pod "pod-subpath-test-secret-7j2n": Phase="Running", Reason="", readiness=true. Elapsed: 14.060904645s
    Sep 15 20:52:01.405: INFO: Pod "pod-subpath-test-secret-7j2n": Phase="Running", Reason="", readiness=true. Elapsed: 16.070314983s
    Sep 15 20:52:03.410: INFO: Pod "pod-subpath-test-secret-7j2n": Phase="Running", Reason="", readiness=true. Elapsed: 18.076130904s
    Sep 15 20:52:05.422: INFO: Pod "pod-subpath-test-secret-7j2n": Phase="Running", Reason="", readiness=true. Elapsed: 20.087267529s
    Sep 15 20:52:07.429: INFO: Pod "pod-subpath-test-secret-7j2n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.094423685s
    STEP: Saw pod success
    Sep 15 20:52:07.429: INFO: Pod "pod-subpath-test-secret-7j2n" satisfied condition "Succeeded or Failed"
    Sep 15 20:52:07.434: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f pod pod-subpath-test-secret-7j2n container test-container-subpath-secret-7j2n: <nil>
    STEP: delete the pod
    Sep 15 20:52:07.468: INFO: Waiting for pod pod-subpath-test-secret-7j2n to disappear
    Sep 15 20:52:07.474: INFO: Pod pod-subpath-test-secret-7j2n no longer exists
    STEP: Deleting pod pod-subpath-test-secret-7j2n
    Sep 15 20:52:07.474: INFO: Deleting pod "pod-subpath-test-secret-7j2n" in namespace "subpath-4158"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:07.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-4158" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":382,"failed":0}
    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:07.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-9799" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":27,"skipped":385,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:09.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-2764" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":396,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-1159-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":13,"skipped":226,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:52:10.255: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep 15 20:52:10.494: INFO: Waiting up to 5m0s for pod "pod-a4b4db64-7809-46b6-8b03-8b9b81848a84" in namespace "emptydir-3150" to be "Succeeded or Failed"
    Sep 15 20:52:10.528: INFO: Pod "pod-a4b4db64-7809-46b6-8b03-8b9b81848a84": Phase="Pending", Reason="", readiness=false. Elapsed: 34.489814ms
    Sep 15 20:52:12.537: INFO: Pod "pod-a4b4db64-7809-46b6-8b03-8b9b81848a84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042592327s
    Sep 15 20:52:14.542: INFO: Pod "pod-a4b4db64-7809-46b6-8b03-8b9b81848a84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048528713s
    STEP: Saw pod success
    Sep 15 20:52:14.543: INFO: Pod "pod-a4b4db64-7809-46b6-8b03-8b9b81848a84" satisfied condition "Succeeded or Failed"
    Sep 15 20:52:14.547: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod pod-a4b4db64-7809-46b6-8b03-8b9b81848a84 container test-container: <nil>
    STEP: delete the pod
    Sep 15 20:52:14.588: INFO: Waiting for pod pod-a4b4db64-7809-46b6-8b03-8b9b81848a84 to disappear
    Sep 15 20:52:14.593: INFO: Pod pod-a4b4db64-7809-46b6-8b03-8b9b81848a84 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:14.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-3150" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":227,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:14.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-6974" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":15,"skipped":240,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:16.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-1255" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":29,"skipped":400,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:16.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4120" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":30,"skipped":410,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir wrapper volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:17.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-wrapper-4701" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":16,"skipped":253,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:52:16.462: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-runtime
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: create the container
    STEP: wait for the container to reach Failed
    STEP: get the container status
    STEP: the container should be terminated
    STEP: the termination message should be set
    Sep 15 20:52:18.547: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
    STEP: delete the container
    [AfterEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:18.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-8572" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":446,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:52:17.132: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep 15 20:52:17.220: INFO: Waiting up to 5m0s for pod "pod-0cf76d20-5966-4bd4-a85b-488cb708295b" in namespace "emptydir-1287" to be "Succeeded or Failed"
    Sep 15 20:52:17.234: INFO: Pod "pod-0cf76d20-5966-4bd4-a85b-488cb708295b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.337156ms
    Sep 15 20:52:19.243: INFO: Pod "pod-0cf76d20-5966-4bd4-a85b-488cb708295b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022780264s
    STEP: Saw pod success
    Sep 15 20:52:19.243: INFO: Pod "pod-0cf76d20-5966-4bd4-a85b-488cb708295b" satisfied condition "Succeeded or Failed"
    Sep 15 20:52:19.248: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod pod-0cf76d20-5966-4bd4-a85b-488cb708295b container test-container: <nil>
    STEP: delete the pod
    Sep 15 20:52:19.272: INFO: Waiting for pod pod-0cf76d20-5966-4bd4-a85b-488cb708295b to disappear
    Sep 15 20:52:19.277: INFO: Pod pod-0cf76d20-5966-4bd4-a85b-488cb708295b no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:19.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1287" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":257,"failed":0}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-4984-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":32,"skipped":451,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:23.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-9679" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":33,"skipped":452,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 20:52:23.693: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ee34427-1125-430a-b456-7e4ddfc4628a" in namespace "projected-7216" to be "Succeeded or Failed"
    Sep 15 20:52:23.699: INFO: Pod "downwardapi-volume-4ee34427-1125-430a-b456-7e4ddfc4628a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.468556ms
    Sep 15 20:52:25.704: INFO: Pod "downwardapi-volume-4ee34427-1125-430a-b456-7e4ddfc4628a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011323341s
    STEP: Saw pod success
    Sep 15 20:52:25.705: INFO: Pod "downwardapi-volume-4ee34427-1125-430a-b456-7e4ddfc4628a" satisfied condition "Succeeded or Failed"
    Sep 15 20:52:25.710: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod downwardapi-volume-4ee34427-1125-430a-b456-7e4ddfc4628a container client-container: <nil>
    STEP: delete the pod
    Sep 15 20:52:25.735: INFO: Waiting for pod downwardapi-volume-4ee34427-1125-430a-b456-7e4ddfc4628a to disappear
    Sep 15 20:52:25.739: INFO: Pod downwardapi-volume-4ee34427-1125-430a-b456-7e4ddfc4628a no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:25.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7216" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":466,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:52:25.778: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 15 20:52:27.872: INFO: Deleting pod "var-expansion-eeabf770-732c-49d0-af12-3c094859c7b3" in namespace "var-expansion-9477"
    Sep 15 20:52:27.883: INFO: Wait up to 5m0s for pod "var-expansion-eeabf770-732c-49d0-af12-3c094859c7b3" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:35.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-9477" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":35,"skipped":474,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    Sep 15 20:52:08.876: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:08.884: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:08.909: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:08.915: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:08.923: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:08.930: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:08.953: INFO: Lookups using dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9724.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9724.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local jessie_udp@dns-test-service-2.dns-9724.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9724.svc.cluster.local]

    
    Sep 15 20:52:13.961: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:13.967: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:13.972: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:13.978: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:13.999: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:14.011: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:14.017: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:14.023: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:14.037: INFO: Lookups using dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9724.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9724.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local jessie_udp@dns-test-service-2.dns-9724.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9724.svc.cluster.local]

    
    Sep 15 20:52:18.961: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:18.967: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:18.975: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:18.981: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:19.003: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:19.008: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:19.014: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:19.020: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:19.035: INFO: Lookups using dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9724.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9724.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local jessie_udp@dns-test-service-2.dns-9724.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9724.svc.cluster.local]

    
    Sep 15 20:52:23.960: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:23.968: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:23.973: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:23.980: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:24.001: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:24.007: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:24.012: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:24.018: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:24.031: INFO: Lookups using dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9724.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9724.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local jessie_udp@dns-test-service-2.dns-9724.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9724.svc.cluster.local]

    
    Sep 15 20:52:28.960: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:28.968: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:28.974: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:28.981: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:29.002: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:29.007: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:29.014: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:29.021: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:29.034: INFO: Lookups using dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9724.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9724.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local jessie_udp@dns-test-service-2.dns-9724.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9724.svc.cluster.local]

    
    Sep 15 20:52:33.960: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:33.967: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:33.972: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:33.977: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:33.995: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:34.000: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:34.007: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:34.013: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9724.svc.cluster.local from pod dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a: the server could not find the requested resource (get pods dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a)
    Sep 15 20:52:34.024: INFO: Lookups using dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9724.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9724.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local jessie_udp@dns-test-service-2.dns-9724.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9724.svc.cluster.local]
    
    Sep 15 20:52:39.039: INFO: DNS probes using dns-9724/dns-test-10a2e6d9-52a2-4e28-bc31-90e8b6cf508a succeeded
    
    STEP: deleting the pod
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:39.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-9724" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":5,"skipped":166,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSS
    ------------------------------
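    The failure list in the DNS stanza above enumerates probes named `<image>_<proto>@<target>`: two client images ("wheezy" and "jessie"), each querying every target over UDP and TCP. As a hedged illustration only (this is a reconstruction of the naming scheme visible in the log, not the actual e2e test code), the matrix can be sketched as:

```go
package main

import "fmt"

// buildProbeNames reconstructs the probe-name scheme seen in the log:
// <image>_<proto>@<target>. Illustrative sketch only, not the real test code.
func buildProbeNames(targets []string) []string {
	var names []string
	for _, image := range []string{"wheezy", "jessie"} { // the two DNS client images
		for _, target := range targets {
			for _, proto := range []string{"udp", "tcp"} {
				names = append(names, fmt.Sprintf("%s_%s@%s", image, proto, target))
			}
		}
	}
	return names
}

func main() {
	// Targets taken verbatim from the failure list above.
	for _, name := range buildProbeNames([]string{
		"dns-querier-2.dns-test-service-2.dns-9724.svc.cluster.local",
		"dns-test-service-2.dns-9724.svc.cluster.local",
	}) {
		fmt.Println(name)
	}
}
```

    The loop order (image outermost, protocol innermost) matches the ordering of the failed-lookup list logged at 20:52:34.024.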
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with projected pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-projected-2zpf
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 15 20:52:19.440: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2zpf" in namespace "subpath-2739" to be "Succeeded or Failed"
    Sep 15 20:52:19.456: INFO: Pod "pod-subpath-test-projected-2zpf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.625862ms
    Sep 15 20:52:21.462: INFO: Pod "pod-subpath-test-projected-2zpf": Phase="Running", Reason="", readiness=true. Elapsed: 2.021533188s
    Sep 15 20:52:23.472: INFO: Pod "pod-subpath-test-projected-2zpf": Phase="Running", Reason="", readiness=true. Elapsed: 4.032212761s
    Sep 15 20:52:25.480: INFO: Pod "pod-subpath-test-projected-2zpf": Phase="Running", Reason="", readiness=true. Elapsed: 6.039921003s
    Sep 15 20:52:27.486: INFO: Pod "pod-subpath-test-projected-2zpf": Phase="Running", Reason="", readiness=true. Elapsed: 8.046064516s
    Sep 15 20:52:29.493: INFO: Pod "pod-subpath-test-projected-2zpf": Phase="Running", Reason="", readiness=true. Elapsed: 10.053221056s
    Sep 15 20:52:31.500: INFO: Pod "pod-subpath-test-projected-2zpf": Phase="Running", Reason="", readiness=true. Elapsed: 12.059845313s
    Sep 15 20:52:33.507: INFO: Pod "pod-subpath-test-projected-2zpf": Phase="Running", Reason="", readiness=true. Elapsed: 14.066588857s
    Sep 15 20:52:35.514: INFO: Pod "pod-subpath-test-projected-2zpf": Phase="Running", Reason="", readiness=true. Elapsed: 16.073772674s
    Sep 15 20:52:37.521: INFO: Pod "pod-subpath-test-projected-2zpf": Phase="Running", Reason="", readiness=true. Elapsed: 18.080803738s
    Sep 15 20:52:39.528: INFO: Pod "pod-subpath-test-projected-2zpf": Phase="Running", Reason="", readiness=true. Elapsed: 20.08833761s
    Sep 15 20:52:41.538: INFO: Pod "pod-subpath-test-projected-2zpf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.098227093s
    STEP: Saw pod success
    Sep 15 20:52:41.538: INFO: Pod "pod-subpath-test-projected-2zpf" satisfied condition "Succeeded or Failed"
    Sep 15 20:52:41.546: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-subpath-test-projected-2zpf container test-container-subpath-projected-2zpf: <nil>
    STEP: delete the pod
    Sep 15 20:52:41.575: INFO: Waiting for pod pod-subpath-test-projected-2zpf to disappear
    Sep 15 20:52:41.580: INFO: Pod pod-subpath-test-projected-2zpf no longer exists
    STEP: Deleting pod pod-subpath-test-projected-2zpf
    Sep 15 20:52:41.580: INFO: Deleting pod "pod-subpath-test-projected-2zpf" in namespace "subpath-2739"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:52:41.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-2739" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":268,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:53:11.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-4892" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":19,"skipped":294,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:53:12.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-5554" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":36,"skipped":501,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:53:11.979: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-c567fa98-ef6e-4d20-8460-4ea37f4717d0
    STEP: Creating a pod to test consume secrets
    Sep 15 20:53:12.049: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c42f1f24-6162-4f74-b450-f23bfd0cb3c7" in namespace "projected-9916" to be "Succeeded or Failed"
    Sep 15 20:53:12.055: INFO: Pod "pod-projected-secrets-c42f1f24-6162-4f74-b450-f23bfd0cb3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.733919ms
    Sep 15 20:53:14.061: INFO: Pod "pod-projected-secrets-c42f1f24-6162-4f74-b450-f23bfd0cb3c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01205284s
    STEP: Saw pod success
    Sep 15 20:53:14.061: INFO: Pod "pod-projected-secrets-c42f1f24-6162-4f74-b450-f23bfd0cb3c7" satisfied condition "Succeeded or Failed"
    Sep 15 20:53:14.067: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-projected-secrets-c42f1f24-6162-4f74-b450-f23bfd0cb3c7 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 15 20:53:14.097: INFO: Waiting for pod pod-projected-secrets-c42f1f24-6162-4f74-b450-f23bfd0cb3c7 to disappear
    Sep 15 20:53:14.101: INFO: Pod pod-projected-secrets-c42f1f24-6162-4f74-b450-f23bfd0cb3c7 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:53:14.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9916" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":298,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:53:15.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-6598" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":21,"skipped":330,"failed":0}
    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:53:17.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-2383" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":348,"failed":0}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:53:17.512: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via environment variable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-9559/configmap-test-108317ee-8eed-4d9c-a37e-9810e47d10fa
    STEP: Creating a pod to test consume configMaps
    Sep 15 20:53:17.608: INFO: Waiting up to 5m0s for pod "pod-configmaps-f899b0c7-7841-4f70-9f03-7061496d5e5e" in namespace "configmap-9559" to be "Succeeded or Failed"
    Sep 15 20:53:17.616: INFO: Pod "pod-configmaps-f899b0c7-7841-4f70-9f03-7061496d5e5e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.707412ms
    Sep 15 20:53:19.625: INFO: Pod "pod-configmaps-f899b0c7-7841-4f70-9f03-7061496d5e5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016370401s
    STEP: Saw pod success
    Sep 15 20:53:19.625: INFO: Pod "pod-configmaps-f899b0c7-7841-4f70-9f03-7061496d5e5e" satisfied condition "Succeeded or Failed"
    Sep 15 20:53:19.631: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-configmaps-f899b0c7-7841-4f70-9f03-7061496d5e5e container env-test: <nil>
    STEP: delete the pod
    Sep 15 20:53:19.659: INFO: Waiting for pod pod-configmaps-f899b0c7-7841-4f70-9f03-7061496d5e5e to disappear
    Sep 15 20:53:19.664: INFO: Pod pod-configmaps-f899b0c7-7841-4f70-9f03-7061496d5e5e no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:53:19.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9559" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":354,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:53:21.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-7106" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":382,"failed":0}
    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:53:21.897: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep 15 20:53:21.960: INFO: Waiting up to 5m0s for pod "pod-292eb430-f3d4-46a8-a64d-8b81bb6509a3" in namespace "emptydir-317" to be "Succeeded or Failed"
    Sep 15 20:53:21.966: INFO: Pod "pod-292eb430-f3d4-46a8-a64d-8b81bb6509a3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.212006ms
    Sep 15 20:53:23.973: INFO: Pod "pod-292eb430-f3d4-46a8-a64d-8b81bb6509a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011869002s
    STEP: Saw pod success
    Sep 15 20:53:23.973: INFO: Pod "pod-292eb430-f3d4-46a8-a64d-8b81bb6509a3" satisfied condition "Succeeded or Failed"
    Sep 15 20:53:23.978: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-292eb430-f3d4-46a8-a64d-8b81bb6509a3 container test-container: <nil>
    STEP: delete the pod
    Sep 15 20:53:24.006: INFO: Waiting for pod pod-292eb430-f3d4-46a8-a64d-8b81bb6509a3 to disappear
    Sep 15 20:53:24.011: INFO: Pod pod-292eb430-f3d4-46a8-a64d-8b81bb6509a3 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:53:24.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-317" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":396,"failed":0}
    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
    [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:53:26.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-117" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":26,"skipped":408,"failed":0}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 51 lines ...
    STEP: Destroying namespace "services-5171" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":37,"skipped":538,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:53:36.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-7310" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":38,"skipped":570,"failed":0}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 20:53:36.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5391b45c-cf80-4ef2-8c00-d3b2d0af393e" in namespace "projected-950" to be "Succeeded or Failed"
    Sep 15 20:53:36.406: INFO: Pod "downwardapi-volume-5391b45c-cf80-4ef2-8c00-d3b2d0af393e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.400607ms
    Sep 15 20:53:38.417: INFO: Pod "downwardapi-volume-5391b45c-cf80-4ef2-8c00-d3b2d0af393e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015006787s
    STEP: Saw pod success
    Sep 15 20:53:38.417: INFO: Pod "downwardapi-volume-5391b45c-cf80-4ef2-8c00-d3b2d0af393e" satisfied condition "Succeeded or Failed"
    Sep 15 20:53:38.424: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod downwardapi-volume-5391b45c-cf80-4ef2-8c00-d3b2d0af393e container client-container: <nil>
    STEP: delete the pod
    Sep 15 20:53:38.446: INFO: Waiting for pod downwardapi-volume-5391b45c-cf80-4ef2-8c00-d3b2d0af393e to disappear
    Sep 15 20:53:38.453: INFO: Pod downwardapi-volume-5391b45c-cf80-4ef2-8c00-d3b2d0af393e no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:53:38.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-950" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":575,"failed":0}
    
    SS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:01.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-3289" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":6,"skipped":176,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:03.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-1093" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":190,"failed":3,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:243.047 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":167,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 20:54:13.098: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fbc442b-f62f-4503-b227-2eb336f80398" in namespace "downward-api-7225" to be "Succeeded or Failed"
    Sep 15 20:54:13.104: INFO: Pod "downwardapi-volume-1fbc442b-f62f-4503-b227-2eb336f80398": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140123ms
    Sep 15 20:54:15.109: INFO: Pod "downwardapi-volume-1fbc442b-f62f-4503-b227-2eb336f80398": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0116168s
    STEP: Saw pod success
    Sep 15 20:54:15.110: INFO: Pod "downwardapi-volume-1fbc442b-f62f-4503-b227-2eb336f80398" satisfied condition "Succeeded or Failed"
    Sep 15 20:54:15.114: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod downwardapi-volume-1fbc442b-f62f-4503-b227-2eb336f80398 container client-container: <nil>
    STEP: delete the pod
    Sep 15 20:54:15.158: INFO: Waiting for pod downwardapi-volume-1fbc442b-f62f-4503-b227-2eb336f80398 to disappear
    Sep 15 20:54:15.162: INFO: Pod downwardapi-volume-1fbc442b-f62f-4503-b227-2eb336f80398 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:15.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7225" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":169,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:54:15.216: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override command
    Sep 15 20:54:15.274: INFO: Waiting up to 5m0s for pod "client-containers-a90f9a51-fcc1-42b6-bc2c-792c8b315f62" in namespace "containers-2573" to be "Succeeded or Failed"
    Sep 15 20:54:15.278: INFO: Pod "client-containers-a90f9a51-fcc1-42b6-bc2c-792c8b315f62": Phase="Pending", Reason="", readiness=false. Elapsed: 3.409451ms
    Sep 15 20:54:17.284: INFO: Pod "client-containers-a90f9a51-fcc1-42b6-bc2c-792c8b315f62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009793499s
    STEP: Saw pod success
    Sep 15 20:54:17.285: INFO: Pod "client-containers-a90f9a51-fcc1-42b6-bc2c-792c8b315f62" satisfied condition "Succeeded or Failed"
    Sep 15 20:54:17.289: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod client-containers-a90f9a51-fcc1-42b6-bc2c-792c8b315f62 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 15 20:54:17.310: INFO: Waiting for pod client-containers-a90f9a51-fcc1-42b6-bc2c-792c8b315f62 to disappear
    Sep 15 20:54:17.314: INFO: Pod client-containers-a90f9a51-fcc1-42b6-bc2c-792c8b315f62 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:17.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-2573" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":180,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:19.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-3329" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":182,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:54:19.461: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:28.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-9830" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":15,"skipped":182,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1102-crds.webhook.example.com via the AdmissionRegistration API
    Sep 15 20:53:52.419: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:54:02.535: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:54:12.639: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:54:22.738: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:54:32.754: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:54:32.754: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc0002be280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should mutate custom resource with pruning [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 20:54:32.755: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc0002be280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 33 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:42.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "hostport-833" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":249,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":39,"skipped":577,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:54:33.459: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
    STEP: Destroying namespace "webhook-8664-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":40,"skipped":577,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:54:42.884: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create secret due to empty secret key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name secret-emptykey-test-b076e42e-0bf1-427a-9d3e-076b087f895a
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:42.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3763" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":41,"skipped":589,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:43.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-6571" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":42,"skipped":591,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:47.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7458" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":346,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:47.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-8211" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":43,"skipped":592,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:48.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-6768" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":18,"skipped":369,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:48.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-6895" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":19,"skipped":372,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:54:47.693: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 15 20:54:47.760: INFO: Waiting up to 5m0s for pod "downward-api-72cabc63-8cfe-4312-9dde-ecaa9a50ff4d" in namespace "downward-api-1460" to be "Succeeded or Failed"
    Sep 15 20:54:47.770: INFO: Pod "downward-api-72cabc63-8cfe-4312-9dde-ecaa9a50ff4d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.822297ms
    Sep 15 20:54:49.777: INFO: Pod "downward-api-72cabc63-8cfe-4312-9dde-ecaa9a50ff4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017304057s
    STEP: Saw pod success
    Sep 15 20:54:49.777: INFO: Pod "downward-api-72cabc63-8cfe-4312-9dde-ecaa9a50ff4d" satisfied condition "Succeeded or Failed"
    Sep 15 20:54:49.783: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod downward-api-72cabc63-8cfe-4312-9dde-ecaa9a50ff4d container dapi-container: <nil>
    STEP: delete the pod
    Sep 15 20:54:49.810: INFO: Waiting for pod downward-api-72cabc63-8cfe-4312-9dde-ecaa9a50ff4d to disappear
    Sep 15 20:54:49.815: INFO: Pod downward-api-72cabc63-8cfe-4312-9dde-ecaa9a50ff4d no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:49.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-1460" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":619,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:54:54.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1194" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":652,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:54:54.139: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-d0db88f3-5e25-473d-8073-c5f68c44fe31
    STEP: Creating a pod to test consume secrets
    Sep 15 20:54:54.200: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f546d9ac-0b53-4c73-b76e-4758b8acd602" in namespace "projected-5909" to be "Succeeded or Failed"
    Sep 15 20:54:54.204: INFO: Pod "pod-projected-secrets-f546d9ac-0b53-4c73-b76e-4758b8acd602": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223026ms
    Sep 15 20:54:56.210: INFO: Pod "pod-projected-secrets-f546d9ac-0b53-4c73-b76e-4758b8acd602": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009978089s
    STEP: Saw pod success
    Sep 15 20:54:56.210: INFO: Pod "pod-projected-secrets-f546d9ac-0b53-4c73-b76e-4758b8acd602" satisfied condition "Succeeded or Failed"
    Sep 15 20:54:56.215: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f pod pod-projected-secrets-f546d9ac-0b53-4c73-b76e-4758b8acd602 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 15 20:54:56.247: INFO: Waiting for pod pod-projected-secrets-f546d9ac-0b53-4c73-b76e-4758b8acd602 to disappear
    Sep 15 20:54:56.250: INFO: Pod pod-projected-secrets-f546d9ac-0b53-4c73-b76e-4758b8acd602 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 18 lines ...
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep 15 20:54:07.556: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
    [It] should be able to convert a non homogeneous list of CRs [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 15 20:54:07.560: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 15 20:54:20.174: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-8458-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-9023.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:54:30.285: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-8458-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-9023.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:54:40.383: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-8458-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-9023.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:54:50.497: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-8458-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-9023.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:55:00.503: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-8458-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-9023.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:55:00.504: FAIL: Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    • Failure [57.689 seconds]
    [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should be able to convert a non homogeneous list of CRs [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 20:55:00.504: Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:55:04.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-5280" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":20,"skipped":380,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":680,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:54:56.266: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename resourcequota
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:55:24.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-5386" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":47,"skipped":680,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:55:24.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2419" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":48,"skipped":681,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:55:24.764: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep 15 20:55:24.822: INFO: Waiting up to 5m0s for pod "pod-9f103d54-5703-44f0-abfb-f0863ca60719" in namespace "emptydir-4700" to be "Succeeded or Failed"
    Sep 15 20:55:24.830: INFO: Pod "pod-9f103d54-5703-44f0-abfb-f0863ca60719": Phase="Pending", Reason="", readiness=false. Elapsed: 7.931074ms
    Sep 15 20:55:26.838: INFO: Pod "pod-9f103d54-5703-44f0-abfb-f0863ca60719": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016024046s
    STEP: Saw pod success
    Sep 15 20:55:26.838: INFO: Pod "pod-9f103d54-5703-44f0-abfb-f0863ca60719" satisfied condition "Succeeded or Failed"
    Sep 15 20:55:26.843: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f pod pod-9f103d54-5703-44f0-abfb-f0863ca60719 container test-container: <nil>
    STEP: delete the pod
    Sep 15 20:55:26.869: INFO: Waiting for pod pod-9f103d54-5703-44f0-abfb-f0863ca60719 to disappear
    Sep 15 20:55:26.879: INFO: Pod pod-9f103d54-5703-44f0-abfb-f0863ca60719 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:55:26.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4700" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":730,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:55:37.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-3017" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":50,"skipped":743,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:55:37.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-3096" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":51,"skipped":764,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:55:37.257: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 15 20:55:37.314: INFO: Waiting up to 5m0s for pod "downward-api-f1a93b4e-4391-49a1-8672-469410b05782" in namespace "downward-api-4529" to be "Succeeded or Failed"
    Sep 15 20:55:37.317: INFO: Pod "downward-api-f1a93b4e-4391-49a1-8672-469410b05782": Phase="Pending", Reason="", readiness=false. Elapsed: 3.190614ms
    Sep 15 20:55:39.324: INFO: Pod "downward-api-f1a93b4e-4391-49a1-8672-469410b05782": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010287361s
    STEP: Saw pod success
    Sep 15 20:55:39.324: INFO: Pod "downward-api-f1a93b4e-4391-49a1-8672-469410b05782" satisfied condition "Succeeded or Failed"
    Sep 15 20:55:39.331: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod downward-api-f1a93b4e-4391-49a1-8672-469410b05782 container dapi-container: <nil>
    STEP: delete the pod
    Sep 15 20:55:39.372: INFO: Waiting for pod downward-api-f1a93b4e-4391-49a1-8672-469410b05782 to disappear
    Sep 15 20:55:39.376: INFO: Pod downward-api-f1a93b4e-4391-49a1-8672-469410b05782 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:55:39.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4529" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":766,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 20:55:39.467: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e59c64eb-48d0-42f1-a60e-ac3be7a60c2b" in namespace "downward-api-2176" to be "Succeeded or Failed"
    Sep 15 20:55:39.472: INFO: Pod "downwardapi-volume-e59c64eb-48d0-42f1-a60e-ac3be7a60c2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.64532ms
    Sep 15 20:55:41.478: INFO: Pod "downwardapi-volume-e59c64eb-48d0-42f1-a60e-ac3be7a60c2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010264245s
    STEP: Saw pod success
    Sep 15 20:55:41.478: INFO: Pod "downwardapi-volume-e59c64eb-48d0-42f1-a60e-ac3be7a60c2b" satisfied condition "Succeeded or Failed"
    Sep 15 20:55:41.483: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f pod downwardapi-volume-e59c64eb-48d0-42f1-a60e-ac3be7a60c2b container client-container: <nil>
    STEP: delete the pod
    Sep 15 20:55:41.511: INFO: Waiting for pod downwardapi-volume-e59c64eb-48d0-42f1-a60e-ac3be7a60c2b to disappear
    Sep 15 20:55:41.518: INFO: Pod downwardapi-volume-e59c64eb-48d0-42f1-a60e-ac3be7a60c2b no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:55:41.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-2176" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":769,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":7,"skipped":204,"failed":4,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}

    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:55:01.126: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename crd-webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep 15 20:55:04.821: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
    [It] should be able to convert a non homogeneous list of CRs [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 15 20:55:04.830: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 15 20:55:17.458: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-5664-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-7666.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:55:27.564: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-5664-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-7666.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:55:37.667: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-5664-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-7666.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:55:47.772: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-5664-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-7666.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:55:57.781: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-5664-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-7666.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:55:57.782: FAIL: Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    • Failure [57.299 seconds]
    [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should be able to convert a non homogeneous list of CRs [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 20:55:57.782: Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:56:27.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-2925" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":390,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:56:27.607: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep 15 20:56:27.667: INFO: Waiting up to 5m0s for pod "pod-dda4b934-817a-4765-abc2-7c899029744e" in namespace "emptydir-8643" to be "Succeeded or Failed"
    Sep 15 20:56:27.672: INFO: Pod "pod-dda4b934-817a-4765-abc2-7c899029744e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.416372ms
    Sep 15 20:56:29.677: INFO: Pod "pod-dda4b934-817a-4765-abc2-7c899029744e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00971764s
    STEP: Saw pod success
    Sep 15 20:56:29.677: INFO: Pod "pod-dda4b934-817a-4765-abc2-7c899029744e" satisfied condition "Succeeded or Failed"
    Sep 15 20:56:29.682: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-dda4b934-817a-4765-abc2-7c899029744e container test-container: <nil>
    STEP: delete the pod
    Sep 15 20:56:29.703: INFO: Waiting for pod pod-dda4b934-817a-4765-abc2-7c899029744e to disappear
    Sep 15 20:56:29.708: INFO: Pod pod-dda4b934-817a-4765-abc2-7c899029744e no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:56:29.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8643" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":397,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSliceMirroring
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:56:35.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslicemirroring-6760" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":23,"skipped":412,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:56:41.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-4879" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":791,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
    SSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":7,"skipped":204,"failed":5,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}

    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:55:58.430: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename crd-webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep 15 20:56:02.207: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
    [It] should be able to convert a non homogeneous list of CRs [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 15 20:56:02.212: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep 15 20:56:14.826: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-675-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-2536.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:56:24.935: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-675-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-2536.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:56:35.033: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-675-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-2536.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:56:45.142: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-675-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-2536.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:56:55.151: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-675-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-2536.svc:9443/crdconvert?timeout=30s": net/http: TLS handshake timeout
    Sep 15 20:56:55.152: FAIL: Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    • Failure [57.345 seconds]
    [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should be able to convert a non homogeneous list of CRs [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 20:56:55.152: Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:499
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":7,"skipped":204,"failed":6,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:57:00.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-3418" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":24,"skipped":416,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:57:04.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-895" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":25,"skipped":426,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Lease
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:57:04.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "lease-test-8213" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":26,"skipped":457,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:57:05.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-3791" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":8,"skipped":227,"failed":6,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 20:57:04.975: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97049cc2-278a-4bf8-b7b9-c41919888759" in namespace "projected-3226" to be "Succeeded or Failed"
    Sep 15 20:57:04.978: INFO: Pod "downwardapi-volume-97049cc2-278a-4bf8-b7b9-c41919888759": Phase="Pending", Reason="", readiness=false. Elapsed: 2.962627ms
    Sep 15 20:57:06.984: INFO: Pod "downwardapi-volume-97049cc2-278a-4bf8-b7b9-c41919888759": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00831098s
    STEP: Saw pod success
    Sep 15 20:57:06.984: INFO: Pod "downwardapi-volume-97049cc2-278a-4bf8-b7b9-c41919888759" satisfied condition "Succeeded or Failed"
    Sep 15 20:57:06.988: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod downwardapi-volume-97049cc2-278a-4bf8-b7b9-c41919888759 container client-container: <nil>
    STEP: delete the pod
    Sep 15 20:57:07.014: INFO: Waiting for pod downwardapi-volume-97049cc2-278a-4bf8-b7b9-c41919888759 to disappear
    Sep 15 20:57:07.018: INFO: Pod downwardapi-volume-97049cc2-278a-4bf8-b7b9-c41919888759 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:57:07.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3226" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":479,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 20:57:07.117: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35cbb40c-8cb3-4d90-9bfe-e5107e269849" in namespace "downward-api-247" to be "Succeeded or Failed"
    Sep 15 20:57:07.120: INFO: Pod "downwardapi-volume-35cbb40c-8cb3-4d90-9bfe-e5107e269849": Phase="Pending", Reason="", readiness=false. Elapsed: 2.918849ms
    Sep 15 20:57:09.124: INFO: Pod "downwardapi-volume-35cbb40c-8cb3-4d90-9bfe-e5107e269849": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007189394s
    STEP: Saw pod success
    Sep 15 20:57:09.124: INFO: Pod "downwardapi-volume-35cbb40c-8cb3-4d90-9bfe-e5107e269849" satisfied condition "Succeeded or Failed"
    Sep 15 20:57:09.128: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod downwardapi-volume-35cbb40c-8cb3-4d90-9bfe-e5107e269849 container client-container: <nil>
    STEP: delete the pod
    Sep 15 20:57:09.152: INFO: Waiting for pod downwardapi-volume-35cbb40c-8cb3-4d90-9bfe-e5107e269849 to disappear
    Sep 15 20:57:09.155: INFO: Pod downwardapi-volume-35cbb40c-8cb3-4d90-9bfe-e5107e269849 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:57:09.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-247" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":503,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:57:09.169: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-3cc15c8e-a734-4d5c-a01b-b7fc9379eff0
    STEP: Creating a pod to test consume secrets
    Sep 15 20:57:09.219: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cb4cd9b7-1005-4539-b32e-bcff63a7514f" in namespace "projected-977" to be "Succeeded or Failed"
    Sep 15 20:57:09.221: INFO: Pod "pod-projected-secrets-cb4cd9b7-1005-4539-b32e-bcff63a7514f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.698782ms
    Sep 15 20:57:11.226: INFO: Pod "pod-projected-secrets-cb4cd9b7-1005-4539-b32e-bcff63a7514f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007364853s
    STEP: Saw pod success
    Sep 15 20:57:11.226: INFO: Pod "pod-projected-secrets-cb4cd9b7-1005-4539-b32e-bcff63a7514f" satisfied condition "Succeeded or Failed"
    Sep 15 20:57:11.229: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-projected-secrets-cb4cd9b7-1005-4539-b32e-bcff63a7514f container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 15 20:57:11.244: INFO: Waiting for pod pod-projected-secrets-cb4cd9b7-1005-4539-b32e-bcff63a7514f to disappear
    Sep 15 20:57:11.247: INFO: Pod pod-projected-secrets-cb4cd9b7-1005-4539-b32e-bcff63a7514f no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:57:11.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-977" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":505,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:57:11.267: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep 15 20:57:11.306: INFO: Waiting up to 5m0s for pod "pod-becbd0da-d666-4746-ab21-aa7b584d07f1" in namespace "emptydir-9544" to be "Succeeded or Failed"
    Sep 15 20:57:11.309: INFO: Pod "pod-becbd0da-d666-4746-ab21-aa7b584d07f1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.664525ms
    Sep 15 20:57:13.314: INFO: Pod "pod-becbd0da-d666-4746-ab21-aa7b584d07f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00798427s
    STEP: Saw pod success
    Sep 15 20:57:13.314: INFO: Pod "pod-becbd0da-d666-4746-ab21-aa7b584d07f1" satisfied condition "Succeeded or Failed"
    Sep 15 20:57:13.317: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-becbd0da-d666-4746-ab21-aa7b584d07f1 container test-container: <nil>
    STEP: delete the pod
    Sep 15 20:57:13.329: INFO: Waiting for pod pod-becbd0da-d666-4746-ab21-aa7b584d07f1 to disappear
    Sep 15 20:57:13.331: INFO: Pod pod-becbd0da-d666-4746-ab21-aa7b584d07f1 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:57:13.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9544" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":513,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 20:57:13.404: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cba7465-3d1f-46ea-8ba7-2ec6bb675469" in namespace "projected-3770" to be "Succeeded or Failed"
    Sep 15 20:57:13.407: INFO: Pod "downwardapi-volume-7cba7465-3d1f-46ea-8ba7-2ec6bb675469": Phase="Pending", Reason="", readiness=false. Elapsed: 3.171006ms
    Sep 15 20:57:15.411: INFO: Pod "downwardapi-volume-7cba7465-3d1f-46ea-8ba7-2ec6bb675469": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00638345s
    STEP: Saw pod success
    Sep 15 20:57:15.411: INFO: Pod "downwardapi-volume-7cba7465-3d1f-46ea-8ba7-2ec6bb675469" satisfied condition "Succeeded or Failed"
    Sep 15 20:57:15.414: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod downwardapi-volume-7cba7465-3d1f-46ea-8ba7-2ec6bb675469 container client-container: <nil>
    STEP: delete the pod
    Sep 15 20:57:15.428: INFO: Waiting for pod downwardapi-volume-7cba7465-3d1f-46ea-8ba7-2ec6bb675469 to disappear
    Sep 15 20:57:15.432: INFO: Pod downwardapi-volume-7cba7465-3d1f-46ea-8ba7-2ec6bb675469 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:57:15.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3770" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":526,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:57:25.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-3963" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":282,"failed":6,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-scheduling] LimitRange
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:57:32.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "limitrange-6444" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":10,"skipped":289,"failed":6,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-9578" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":32,"skipped":529,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-7382-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":11,"skipped":318,"failed":6,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
    STEP: Registering the webhook via the AdmissionRegistration API
    Sep 15 20:56:56.145: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:57:06.259: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:57:16.360: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:57:26.456: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:57:36.466: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:57:36.467: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc0002be280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should be able to deny attaching pod [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 20:57:36.467: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc0002be280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 31 lines ...
    STEP: Destroying namespace "webhook-1156-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":33,"skipped":533,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":54,"skipped":801,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:57:36.520: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-8191-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":55,"skipped":801,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "crd-webhook-170" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":56,"skipped":806,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:57:37.998: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 15 20:57:40.045: INFO: Deleting pod "var-expansion-2743ac3e-0989-47cd-922b-c97e5eb53b3b" in namespace "var-expansion-4293"
    Sep 15 20:57:40.051: INFO: Wait up to 5m0s for pod "var-expansion-2743ac3e-0989-47cd-922b-c97e5eb53b3b" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:57:52.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-4293" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":34,"skipped":538,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:57:55.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-3462" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":57,"skipped":811,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-3678-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":58,"skipped":826,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:57:59.290: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on tmpfs
    Sep 15 20:57:59.339: INFO: Waiting up to 5m0s for pod "pod-ea48f151-90d1-4a13-bdea-b6d3a3a887cb" in namespace "emptydir-7462" to be "Succeeded or Failed"
    Sep 15 20:57:59.342: INFO: Pod "pod-ea48f151-90d1-4a13-bdea-b6d3a3a887cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.977769ms
    Sep 15 20:58:01.347: INFO: Pod "pod-ea48f151-90d1-4a13-bdea-b6d3a3a887cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008642353s
    STEP: Saw pod success
    Sep 15 20:58:01.347: INFO: Pod "pod-ea48f151-90d1-4a13-bdea-b6d3a3a887cb" satisfied condition "Succeeded or Failed"
    Sep 15 20:58:01.351: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-ea48f151-90d1-4a13-bdea-b6d3a3a887cb container test-container: <nil>
    STEP: delete the pod
    Sep 15 20:58:01.371: INFO: Waiting for pod pod-ea48f151-90d1-4a13-bdea-b6d3a3a887cb to disappear
    Sep 15 20:58:01.374: INFO: Pod pod-ea48f151-90d1-4a13-bdea-b6d3a3a887cb no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:58:01.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7462" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":849,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:58:07.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-6629" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":60,"skipped":893,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:58:07.588: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]

    STEP: creating the pod
    Sep 15 20:58:07.631: INFO: PodSpec: initContainers in spec.initContainers
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:58:10.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-5540" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":61,"skipped":933,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:58:10.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-5952" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":62,"skipped":960,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:58:11.021: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep 15 20:58:11.073: INFO: Waiting up to 5m0s for pod "pod-00a29dc4-5603-4692-8893-32b470233369" in namespace "emptydir-9735" to be "Succeeded or Failed"
    Sep 15 20:58:11.077: INFO: Pod "pod-00a29dc4-5603-4692-8893-32b470233369": Phase="Pending", Reason="", readiness=false. Elapsed: 3.981906ms
    Sep 15 20:58:13.083: INFO: Pod "pod-00a29dc4-5603-4692-8893-32b470233369": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009244497s
    STEP: Saw pod success
    Sep 15 20:58:13.083: INFO: Pod "pod-00a29dc4-5603-4692-8893-32b470233369" satisfied condition "Succeeded or Failed"
    Sep 15 20:58:13.086: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f pod pod-00a29dc4-5603-4692-8893-32b470233369 container test-container: <nil>
    STEP: delete the pod
    Sep 15 20:58:13.110: INFO: Waiting for pod pod-00a29dc4-5603-4692-8893-32b470233369 to disappear
    Sep 15 20:58:13.112: INFO: Pod pod-00a29dc4-5603-4692-8893-32b470233369 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:58:13.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9735" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":63,"skipped":968,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:58:29.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-65" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":64,"skipped":997,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
    STEP: Deploying the webhook pod
    STEP: Wait for the deployment to be ready
    Sep 15 20:57:36.998: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep 15 20:57:40.019: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should unconditionally reject operations on fail closed webhook [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
    Sep 15 20:57:50.038: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:58:00.163: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:58:10.256: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:58:20.352: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:58:30.363: INFO: Waiting for webhook configuration to be ready...
    Sep 15 20:58:30.363: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    
    • Failure [54.104 seconds]
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should unconditionally reject operations on fail closed webhook [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 20:58:30.363: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1275
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":11,"skipped":331,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:58:30.438: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 4 lines ...
    STEP: Deploying the webhook pod
    STEP: Wait for the deployment to be ready
    Sep 15 20:58:31.182: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep 15 20:58:34.203: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should unconditionally reject operations on fail closed webhook [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
    STEP: create a namespace for the webhook
    STEP: create a configmap should be unconditionally rejected by the webhook
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:58:34.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-2" for this suite.
    STEP: Destroying namespace "webhook-2-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":12,"skipped":331,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:58:34.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-2260" for this suite.
    
    •S
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":65,"skipped":1000,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:58:34.464: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating secret secrets-7783/secret-test-e1a4b2bc-8523-4398-96dd-28cc750d55e1
    STEP: Creating a pod to test consume secrets
    Sep 15 20:58:34.537: INFO: Waiting up to 5m0s for pod "pod-configmaps-da4a33e6-934e-4a5c-b2dd-204cf20facaa" in namespace "secrets-7783" to be "Succeeded or Failed"
    Sep 15 20:58:34.574: INFO: Pod "pod-configmaps-da4a33e6-934e-4a5c-b2dd-204cf20facaa": Phase="Pending", Reason="", readiness=false. Elapsed: 37.465237ms
    Sep 15 20:58:36.580: INFO: Pod "pod-configmaps-da4a33e6-934e-4a5c-b2dd-204cf20facaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.042790438s
    STEP: Saw pod success
    Sep 15 20:58:36.580: INFO: Pod "pod-configmaps-da4a33e6-934e-4a5c-b2dd-204cf20facaa" satisfied condition "Succeeded or Failed"
    Sep 15 20:58:36.584: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-configmaps-da4a33e6-934e-4a5c-b2dd-204cf20facaa container env-test: <nil>
    STEP: delete the pod
    Sep 15 20:58:36.782: INFO: Waiting for pod pod-configmaps-da4a33e6-934e-4a5c-b2dd-204cf20facaa to disappear
    Sep 15 20:58:36.785: INFO: Pod pod-configmaps-da4a33e6-934e-4a5c-b2dd-204cf20facaa no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:58:36.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-7783" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":383,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:58:36.811: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-c6b3d44e-7227-4f80-84bb-065babce26a6
    STEP: Creating a pod to test consume secrets
    Sep 15 20:58:36.854: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-37179b86-9172-4611-9728-2a321ccd39ea" in namespace "projected-7433" to be "Succeeded or Failed"
    Sep 15 20:58:36.858: INFO: Pod "pod-projected-secrets-37179b86-9172-4611-9728-2a321ccd39ea": Phase="Pending", Reason="", readiness=false. Elapsed: 3.188876ms
    Sep 15 20:58:38.862: INFO: Pod "pod-projected-secrets-37179b86-9172-4611-9728-2a321ccd39ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007070727s
    STEP: Saw pod success
    Sep 15 20:58:38.862: INFO: Pod "pod-projected-secrets-37179b86-9172-4611-9728-2a321ccd39ea" satisfied condition "Succeeded or Failed"
    Sep 15 20:58:38.865: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-projected-secrets-37179b86-9172-4611-9728-2a321ccd39ea container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 15 20:58:38.880: INFO: Waiting for pod pod-projected-secrets-37179b86-9172-4611-9728-2a321ccd39ea to disappear
    Sep 15 20:58:38.885: INFO: Pod pod-projected-secrets-37179b86-9172-4611-9728-2a321ccd39ea no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:58:38.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7433" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":392,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    Sep 15 20:58:42.126: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
    Sep 15 20:58:42.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2887 describe pod agnhost-primary-rr54h'
    Sep 15 20:58:42.231: INFO: stderr: ""
    Sep 15 20:58:42.231: INFO: stdout: "Name:         agnhost-primary-rr54h\nNamespace:    kubectl-2887\nPriority:     0\nNode:         k8s-upgrade-and-conformance-soloe4-worker-3bhzw2/172.18.0.7\nStart Time:   Thu, 15 Sep 2022 20:58:39 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           192.168.2.50\nIPs:\n  IP:           192.168.2.50\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://d85aa87f73dcf6f004cc1ca1f63c62772832d261535d1fe8bce2c57ccdc98b9a\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 15 Sep 2022 20:58:40 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2csd4 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-2csd4:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  3s    default-scheduler  Successfully assigned kubectl-2887/agnhost-primary-rr54h to k8s-upgrade-and-conformance-soloe4-worker-3bhzw2\n  Normal  Pulled     2s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    2s    kubelet            Created container agnhost-primary\n  Normal  Started    2s    kubelet            Started container agnhost-primary\n"
    Sep 15 20:58:42.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2887 describe rc agnhost-primary'
    Sep 15 20:58:42.351: INFO: stderr: ""
    Sep 15 20:58:42.352: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-2887\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-primary-rr54h\n"
    Sep 15 20:58:42.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2887 describe service agnhost-primary'
    Sep 15 20:58:42.451: INFO: stderr: ""
    Sep 15 20:58:42.451: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-2887\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.140.149.105\nIPs:               10.140.149.105\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.2.50:6379\nSession Affinity:  None\nEvents:            <none>\n"
    Sep 15 20:58:42.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2887 describe node k8s-upgrade-and-conformance-soloe4-fvv82-6s6wv'
    Sep 15 20:58:42.582: INFO: stderr: ""
    Sep 15 20:58:42.582: INFO: stdout: "Name:               k8s-upgrade-and-conformance-soloe4-fvv82-6s6wv\nRoles:              control-plane,master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=k8s-upgrade-and-conformance-soloe4-fvv82-6s6wv\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/control-plane=\n                    node-role.kubernetes.io/master=\n                    node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations:        cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-soloe4\n                    cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-mswovu\n                    cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-soloe4-fvv82-6s6wv\n                    cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n                    cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-soloe4-fvv82\n                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Thu, 15 Sep 2022 20:40:33 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  k8s-upgrade-and-conformance-soloe4-fvv82-6s6wv\n  AcquireTime:     <unset>\n  RenewTime:       Thu, 15 Sep 2022 20:58:36 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Thu, 15 Sep 2022 20:56:29 +0000   Thu, 15 Sep 2022 20:40:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Thu, 15 Sep 2022 20:56:29 +0000   Thu, 15 Sep 2022 20:40:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Thu, 15 Sep 2022 20:56:29 +0000   Thu, 15 Sep 2022 20:40:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Thu, 15 Sep 2022 20:56:29 +0000   Thu, 15 Sep 2022 20:41:25 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.9\n  Hostname:    k8s-upgrade-and-conformance-soloe4-fvv82-6s6wv\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860676Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 1940ddc4344a473397b58ea81967e009\n  System UUID:                34f69f78-712d-4497-aebb-baed30cf6ce7\n  Boot ID:                    f2a6caee-491f-4ce9-a86e-dacfa0f49447\n  Kernel Version:             5.4.0-1076-gke\n  OS Image:                   Ubuntu 22.04.1 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.6.7\n  Kubelet Version:            v1.21.14\n  Kube-Proxy Version:         v1.21.14\nPodCIDR:                      192.168.5.0/24\nPodCIDRs:                     192.168.5.0/24\nProviderID:                   docker:////k8s-upgrade-and-conformance-soloe4-fvv82-6s6wv\nNon-terminated Pods:          (6 in total)\n  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-k8s-upgrade-and-conformance-soloe4-fvv82-6s6wv                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m\n  kube-system                 kindnet-2nrwg                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18m\n  kube-system                 kube-apiserver-k8s-upgrade-and-conformance-soloe4-fvv82-6s6wv             250m (3%)     0 (0%)      0 (0%)           0 (0%)         17m\n  kube-system                 kube-controller-manager-k8s-upgrade-and-conformance-soloe4-fvv82-6s6wv    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m\n  kube-system                 kube-proxy-9m8z7                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m\n  kube-system                 kube-scheduler-k8s-upgrade-and-conformance-soloe4-fvv82-6s6wv             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                750m (9%)   100m (1%)\n  memory             150Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type    Reason    Age   From        Message\n  ----    ------    ----  ----        -------\n  Normal  Starting  17m   kube-proxy  Starting kube-proxy.\n  Normal  Starting  13m   kube-proxy  Starting kube-proxy.\n"
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:58:42.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2887" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":15,"skipped":417,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
    STEP: Destroying namespace "webhook-6720-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":16,"skipped":450,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:58:46.532: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 15 20:58:46.577: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8bdc9cba-db94-4894-b210-b7abd047bf93" in namespace "security-context-test-7107" to be "Succeeded or Failed"
    Sep 15 20:58:46.581: INFO: Pod "busybox-user-65534-8bdc9cba-db94-4894-b210-b7abd047bf93": Phase="Pending", Reason="", readiness=false. Elapsed: 3.592587ms
    Sep 15 20:58:48.586: INFO: Pod "busybox-user-65534-8bdc9cba-db94-4894-b210-b7abd047bf93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007783633s
    Sep 15 20:58:48.586: INFO: Pod "busybox-user-65534-8bdc9cba-db94-4894-b210-b7abd047bf93" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:58:48.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-7107" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":480,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:58:50.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-8247" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":511,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:00.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-2049" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":35,"skipped":566,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    • [SLOW TEST:334.105 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":27,"skipped":412,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:01.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-9677" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":36,"skipped":586,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:01.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-8688" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":19,"skipped":512,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 20:59:01.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab08d2d7-73ff-4484-b525-c4d6499d5365" in namespace "projected-4306" to be "Succeeded or Failed"

    Sep 15 20:59:01.385: INFO: Pod "downwardapi-volume-ab08d2d7-73ff-4484-b525-c4d6499d5365": Phase="Pending", Reason="", readiness=false. Elapsed: 3.195023ms
    Sep 15 20:59:03.389: INFO: Pod "downwardapi-volume-ab08d2d7-73ff-4484-b525-c4d6499d5365": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007584172s
    Sep 15 20:59:05.392: INFO: Pod "downwardapi-volume-ab08d2d7-73ff-4484-b525-c4d6499d5365": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010500395s
    STEP: Saw pod success
    Sep 15 20:59:05.392: INFO: Pod "downwardapi-volume-ab08d2d7-73ff-4484-b525-c4d6499d5365" satisfied condition "Succeeded or Failed"

    Sep 15 20:59:05.395: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod downwardapi-volume-ab08d2d7-73ff-4484-b525-c4d6499d5365 container client-container: <nil>
    STEP: delete the pod
    Sep 15 20:59:05.408: INFO: Waiting for pod downwardapi-volume-ab08d2d7-73ff-4484-b525-c4d6499d5365 to disappear
    Sep 15 20:59:05.411: INFO: Pod downwardapi-volume-ab08d2d7-73ff-4484-b525-c4d6499d5365 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:05.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4306" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":614,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
    
    Sep 15 20:59:08.032: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment":
    &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88  deployment-7922  0a46d62e-b62a-42bc-a5b5-0f235f2f9890 10403 3 2022-09-15 20:59:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 92da796c-66f9-4acc-8db1-f2c05cbb2a1e 0xc003531577 0xc003531578}] []  [{kube-controller-manager Update apps/v1 2022-09-15 20:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92da796c-66f9-4acc-8db1-f2c05cbb2a1e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0035315f8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
    Sep 15 20:59:08.032: INFO: All old ReplicaSets of Deployment "webserver-deployment":
    Sep 15 20:59:08.032: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb  deployment-7922  a75ebd6a-9f3b-4e26-905b-25c6c3abec1c 10401 3 2022-09-15 20:59:01 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 92da796c-66f9-4acc-8db1-f2c05cbb2a1e 0xc003531657 0xc003531658}] []  [{kube-controller-manager Update apps/v1 2022-09-15 20:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92da796c-66f9-4acc-8db1-f2c05cbb2a1e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] []  []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0035316c8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
    Sep 15 20:59:08.061: INFO: Pod "webserver-deployment-795d758f88-2jdkq" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-2jdkq webserver-deployment-795d758f88- deployment-7922  7ac81340-2415-4d5a-a33b-4ba742770fc6 10390 0 2022-09-15 20:59:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0a46d62e-b62a-42bc-a5b5-0f235f2f9890 0xc003b040c7 0xc003b040c8}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a46d62e-b62a-42bc-a5b5-0f235f2f9890\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-15 20:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.38\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dw9q7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:
ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dw9q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:
[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.38,StartTime:2022-09-15 20:59:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 15 20:59:08.062: INFO: Pod "webserver-deployment-795d758f88-66h8w" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-66h8w webserver-deployment-795d758f88- deployment-7922  a116ed9b-6eda-4366-86a6-7180361ebe49 10396 0 2022-09-15 20:59:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0a46d62e-b62a-42bc-a5b5-0f235f2f9890 0xc003b047d0 0xc003b047d1}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a46d62e-b62a-42bc-a5b5-0f235f2f9890\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-15 20:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.31\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kxlq9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:
ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kxlq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:
[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.1.31,StartTime:2022-09-15 20:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.31,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 15 20:59:08.063: INFO: Pod "webserver-deployment-795d758f88-6bsjw" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-6bsjw webserver-deployment-795d758f88- deployment-7922  894ad91f-c323-4e36-8dab-b72489320a3a 10443 0 2022-09-15 20:59:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0a46d62e-b62a-42bc-a5b5-0f235f2f9890 0xc003b04e80 0xc003b04e81}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a46d62e-b62a-42bc-a5b5-0f235f2f9890\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-knzcw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-knzcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 15 20:59:08.063: INFO: Pod "webserver-deployment-795d758f88-7x6sd" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-7x6sd webserver-deployment-795d758f88- deployment-7922  e06ce781-a787-4fbd-824e-746c391715ac 10439 0 2022-09-15 20:59:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0a46d62e-b62a-42bc-a5b5-0f235f2f9890 0xc003b052a7 0xc003b052a8}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a46d62e-b62a-42bc-a5b5-0f235f2f9890\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rhhbk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhhbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 15 20:59:08.063: INFO: Pod "webserver-deployment-795d758f88-86kpl" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-86kpl webserver-deployment-795d758f88- deployment-7922  b6be697e-a71c-4263-a568-be45bca4538c 10393 0 2022-09-15 20:59:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0a46d62e-b62a-42bc-a5b5-0f235f2f9890 0xc003b05867 0xc003b05868}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a46d62e-b62a-42bc-a5b5-0f235f2f9890\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-15 20:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.56\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dc84k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:
ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dc84k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-soloe4-worker-3bhzw2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralCon
tainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.56,StartTime:2022-09-15 20:59:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 15 20:59:08.064: INFO: Pod "webserver-deployment-795d758f88-gms74" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-gms74 webserver-deployment-795d758f88- deployment-7922  2fbe671d-7e6e-4eca-8e59-6d7b6fa4af4b 10432 0 2022-09-15 20:59:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0a46d62e-b62a-42bc-a5b5-0f235f2f9890 0xc003b05ea0 0xc003b05ea1}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a46d62e-b62a-42bc-a5b5-0f235f2f9890\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hwmbt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:ni
l,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hwmbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 15 20:59:08.064: INFO: Pod "webserver-deployment-795d758f88-gzz4p" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-gzz4p webserver-deployment-795d758f88- deployment-7922  c15744e6-7778-4fb0-8532-3fceb0a86b39 10428 0 2022-09-15 20:59:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0a46d62e-b62a-42bc-a5b5-0f235f2f9890 0xc0022820d7 0xc0022820d8}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a46d62e-b62a-42bc-a5b5-0f235f2f9890\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zfjzh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:ni
l,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zfjzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node
.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 15 20:59:08.064: INFO: Pod "webserver-deployment-795d758f88-hjf6h" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-hjf6h webserver-deployment-795d758f88- deployment-7922  8184ac32-0aa9-4242-97ca-edcc18c73b52 10429 0 2022-09-15 20:59:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0a46d62e-b62a-42bc-a5b5-0f235f2f9890 0xc002282240 0xc002282241}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a46d62e-b62a-42bc-a5b5-0f235f2f9890\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6glt9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:ni
l,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6glt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node
.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 15 20:59:08.064: INFO: Pod "webserver-deployment-795d758f88-krmvj" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-krmvj webserver-deployment-795d758f88- deployment-7922  d2ce2c77-ae03-4a8a-85c2-94c753ce658a 10330 0 2022-09-15 20:59:06 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0a46d62e-b62a-42bc-a5b5-0f235f2f9890 0xc002282410 0xc002282411}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a46d62e-b62a-42bc-a5b5-0f235f2f9890\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-15 20:59:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-86r4s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Na
me:kube-api-access-86r4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-soloe4-worker-w58p08,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCo
ndition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2022-09-15 20:59:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 15 20:59:08.065: INFO: Pod "webserver-deployment-795d758f88-pwn92" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-pwn92 webserver-deployment-795d758f88- deployment-7922  b1940ca6-505d-404d-b492-4046e316998e 10419 0 2022-09-15 20:59:07 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0a46d62e-b62a-42bc-a5b5-0f235f2f9890 0xc0022825f0 0xc0022825f1}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a46d62e-b62a-42bc-a5b5-0f235f2f9890\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dgjql,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:ni
l,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgjql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-soloe4-worker-3bhzw2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io
/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 15 20:59:08.065: INFO: Pod "webserver-deployment-795d758f88-rsz6j" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-rsz6j webserver-deployment-795d758f88- deployment-7922  6411b417-c45e-4e19-aafb-11ce2df4904f 10385 0 2022-09-15 20:59:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0a46d62e-b62a-42bc-a5b5-0f235f2f9890 0xc002282760 0xc002282761}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a46d62e-b62a-42bc-a5b5-0f235f2f9890\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-15 20:59:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.64\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p2twg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:
ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p2twg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-soloe4-worker-w58p08,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralCon
tainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.6.64,StartTime:2022-09-15 20:59:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep 15 20:59:08.067: INFO: Pod "webserver-deployment-795d758f88-s4v99" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-s4v99 webserver-deployment-795d758f88- deployment-7922  f4c46d76-29ad-4d57-909c-25485f1e9eda 10438 0 2022-09-15 20:59:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 0a46d62e-b62a-42bc-a5b5-0f235f2f9890 0xc002282960 0xc002282961}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a46d62e-b62a-42bc-a5b5-0f235f2f9890\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lfhqj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:ni
l,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lfhqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 15 20:59:08.067: INFO: Pod "webserver-deployment-847dcfb7fb-45vl5" is available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-45vl5 webserver-deployment-847dcfb7fb- deployment-7922  38758555-c4b7-4d4f-9537-37349ceadbea 10205 0 2022-09-15 20:59:01 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb a75ebd6a-9f3b-4e26-905b-25c6c3abec1c 0xc002282aa7 0xc002282aa8}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a75ebd6a-9f3b-4e26-905b-25c6c3abec1c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-15 20:59:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.37\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7th9z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Re
quests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7th9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContai
ner{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.37,StartTime:2022-09-15 20:59:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-15 20:59:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://036f81abb5abe60773c1ec3695b962a7d61773c6ca767645c2c1a5a053049000,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep 15 20:59:08.067: INFO: Pod "webserver-deployment-847dcfb7fb-46xvt" is not available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-46xvt webserver-deployment-847dcfb7fb- deployment-7922  310f71ca-d5a7-4122-968c-1ad0c516f4bc 10424 0 2022-09-15 20:59:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb a75ebd6a-9f3b-4e26-905b-25c6c3abec1c 0xc002282c70 0xc002282c71}] []  [{kube-controller-manager Update v1 2022-09-15 20:59:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a75ebd6a-9f3b-4e26-905b-25c6c3abec1c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s6x22,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:ni
l,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s6x22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]To
leration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-15 20:59:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 37 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:08.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-7922" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":20,"skipped":531,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 20:59:08.261: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcbe2678-ae81-4a64-be94-431ec64a4a83" in namespace "projected-5606" to be "Succeeded or Failed"

    Sep 15 20:59:08.267: INFO: Pod "downwardapi-volume-bcbe2678-ae81-4a64-be94-431ec64a4a83": Phase="Pending", Reason="", readiness=false. Elapsed: 5.125131ms
    Sep 15 20:59:10.271: INFO: Pod "downwardapi-volume-bcbe2678-ae81-4a64-be94-431ec64a4a83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00904074s
    Sep 15 20:59:12.276: INFO: Pod "downwardapi-volume-bcbe2678-ae81-4a64-be94-431ec64a4a83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014389091s
    STEP: Saw pod success
    Sep 15 20:59:12.276: INFO: Pod "downwardapi-volume-bcbe2678-ae81-4a64-be94-431ec64a4a83" satisfied condition "Succeeded or Failed"

    Sep 15 20:59:12.279: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod downwardapi-volume-bcbe2678-ae81-4a64-be94-431ec64a4a83 container client-container: <nil>
    STEP: delete the pod
    Sep 15 20:59:12.300: INFO: Waiting for pod downwardapi-volume-bcbe2678-ae81-4a64-be94-431ec64a4a83 to disappear
    Sep 15 20:59:12.303: INFO: Pod downwardapi-volume-bcbe2678-ae81-4a64-be94-431ec64a4a83 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:12.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5606" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":542,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:14.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-6303" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":66,"skipped":1005,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:59:14.964: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-38295642-938c-4f10-9ab5-a7d344c7c1f6
    STEP: Creating a pod to test consume secrets
    Sep 15 20:59:15.054: INFO: Waiting up to 5m0s for pod "pod-secrets-d7292c89-e618-4ab6-9ebb-800589771916" in namespace "secrets-5341" to be "Succeeded or Failed"

    Sep 15 20:59:15.060: INFO: Pod "pod-secrets-d7292c89-e618-4ab6-9ebb-800589771916": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171967ms
    Sep 15 20:59:17.064: INFO: Pod "pod-secrets-d7292c89-e618-4ab6-9ebb-800589771916": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010193442s
    STEP: Saw pod success
    Sep 15 20:59:17.064: INFO: Pod "pod-secrets-d7292c89-e618-4ab6-9ebb-800589771916" satisfied condition "Succeeded or Failed"
    Sep 15 20:59:17.067: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-secrets-d7292c89-e618-4ab6-9ebb-800589771916 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 15 20:59:17.080: INFO: Waiting for pod pod-secrets-d7292c89-e618-4ab6-9ebb-800589771916 to disappear
    Sep 15 20:59:17.083: INFO: Pod pod-secrets-d7292c89-e618-4ab6-9ebb-800589771916 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:17.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-5341" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1020,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:17.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-8216" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":68,"skipped":1039,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
    Sep 15 20:59:09.224: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should honor timeout [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Setting timeout (1s) shorter than webhook latency (5s)
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
    STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is longer than webhook latency
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is empty (defaulted to 10s in v1)
    STEP: Registering slow webhook via the AdmissionRegistration API
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:21.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-9552" for this suite.
    STEP: Destroying namespace "webhook-9552-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":38,"skipped":619,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:26.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-9263" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":549,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:26.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-5747" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":39,"skipped":621,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:59:26.450: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-041f0f00-5de4-4f85-bdf6-0480c492a7d4
    STEP: Creating a pod to test consume configMaps
    Sep 15 20:59:26.497: INFO: Waiting up to 5m0s for pod "pod-configmaps-56330ee8-5cfe-4223-86d8-673fa4119c2c" in namespace "configmap-7215" to be "Succeeded or Failed"
    Sep 15 20:59:26.501: INFO: Pod "pod-configmaps-56330ee8-5cfe-4223-86d8-673fa4119c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.798093ms
    Sep 15 20:59:28.506: INFO: Pod "pod-configmaps-56330ee8-5cfe-4223-86d8-673fa4119c2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008945428s
    STEP: Saw pod success
    Sep 15 20:59:28.506: INFO: Pod "pod-configmaps-56330ee8-5cfe-4223-86d8-673fa4119c2c" satisfied condition "Succeeded or Failed"
    Sep 15 20:59:28.509: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-configmaps-56330ee8-5cfe-4223-86d8-673fa4119c2c container configmap-volume-test: <nil>
    STEP: delete the pod
    Sep 15 20:59:28.524: INFO: Waiting for pod pod-configmaps-56330ee8-5cfe-4223-86d8-673fa4119c2c to disappear
    Sep 15 20:59:28.527: INFO: Pod pod-configmaps-56330ee8-5cfe-4223-86d8-673fa4119c2c no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:28.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7215" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":552,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 52 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:31.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1980" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":69,"skipped":1041,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] KubeletManagedEtcHosts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:33.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "e2e-kubelet-etc-hosts-9025" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":563,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:59:31.587: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-ecba119a-ad51-4d77-9715-9d920e50f7d8
    STEP: Creating a pod to test consume configMaps
    Sep 15 20:59:31.636: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ddcaa9c-6c53-4d95-82ef-7c16b1255c63" in namespace "configmap-8442" to be "Succeeded or Failed"
    Sep 15 20:59:31.641: INFO: Pod "pod-configmaps-1ddcaa9c-6c53-4d95-82ef-7c16b1255c63": Phase="Pending", Reason="", readiness=false. Elapsed: 5.60408ms
    Sep 15 20:59:33.644: INFO: Pod "pod-configmaps-1ddcaa9c-6c53-4d95-82ef-7c16b1255c63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008649687s
    STEP: Saw pod success
    Sep 15 20:59:33.644: INFO: Pod "pod-configmaps-1ddcaa9c-6c53-4d95-82ef-7c16b1255c63" satisfied condition "Succeeded or Failed"
    Sep 15 20:59:33.647: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-configmaps-1ddcaa9c-6c53-4d95-82ef-7c16b1255c63 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 15 20:59:33.663: INFO: Waiting for pod pod-configmaps-1ddcaa9c-6c53-4d95-82ef-7c16b1255c63 to disappear
    Sep 15 20:59:33.666: INFO: Pod pod-configmaps-1ddcaa9c-6c53-4d95-82ef-7c16b1255c63 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:33.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8442" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":70,"skipped":1064,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:59:33.418: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep 15 20:59:33.454: INFO: Waiting up to 5m0s for pod "pod-131b22a5-ca42-4c89-9ac6-d2358995192c" in namespace "emptydir-9334" to be "Succeeded or Failed"
    Sep 15 20:59:33.460: INFO: Pod "pod-131b22a5-ca42-4c89-9ac6-d2358995192c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.606105ms
    Sep 15 20:59:35.464: INFO: Pod "pod-131b22a5-ca42-4c89-9ac6-d2358995192c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00997826s
    STEP: Saw pod success
    Sep 15 20:59:35.464: INFO: Pod "pod-131b22a5-ca42-4c89-9ac6-d2358995192c" satisfied condition "Succeeded or Failed"
    Sep 15 20:59:35.467: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-131b22a5-ca42-4c89-9ac6-d2358995192c container test-container: <nil>
    STEP: delete the pod
    Sep 15 20:59:35.479: INFO: Waiting for pod pod-131b22a5-ca42-4c89-9ac6-d2358995192c to disappear
    Sep 15 20:59:35.481: INFO: Pod pod-131b22a5-ca42-4c89-9ac6-d2358995192c no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:35.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-9334" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":564,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:38.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7893" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":71,"skipped":1093,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-4027-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":26,"skipped":578,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    STEP: Destroying namespace "services-5780" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":27,"skipped":583,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:59:44.336: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename gc
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:45.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-256" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":28,"skipped":583,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:46.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "certificates-3466" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":29,"skipped":584,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:59:00.307: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep 15 20:59:00.350: INFO: PodSpec: initContainers in spec.initContainers
    Sep 15 20:59:47.732: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-6d0fa39b-751b-4797-8d8e-357477be3fd2", GenerateName:"", Namespace:"init-container-6538", SelfLink:"", UID:"160b11dc-ef56-458e-bc60-ea8ee370e5b1", ResourceVersion:"11520", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63798872340, loc:(*time.Location)(0x9e363e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"350088457"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00430f638), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00430f650)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00430f668), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00430f680)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-9684d", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00427afc0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-9684d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-9684d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-9684d", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0037bfe70), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k8s-upgrade-and-conformance-soloe4-worker-3bhzw2", HostNetwork:false, 
HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002286d20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037bfef0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037bff10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0037bff18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0037bff1c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00430b720), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798872340, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798872340, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798872340, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotReady", 
Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798872340, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"192.168.2.52", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.2.52"}}, StartTime:(*v1.Time)(0xc00430f6b0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002286e00)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002286e70)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://a4cf65b5fe06fba753d24ac23fff0546642d2114fecff31e3a8a1c3f07d0abee", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00427b040), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00427b020), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0037bff9f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
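    The struct dump above is the pod created by the InitContainer conformance spec: init1 repeatedly fails (`/bin/false`, RestartCount:3), so init2 and the app container run1 never start and the pod stays Pending. A minimal sketch of the spec implied by that dump, written as a plain Python dict so it is self-contained; names and images are taken from the dump, but the exact manifest the e2e test submits is an assumption.

    ```python
    # Hypothetical reconstruction of the pod implied by the dump above: two
    # init containers ahead of an app container, with restartPolicy Always so
    # the failing init1 is retried forever and run1 never starts.
    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "spec": {
            "restartPolicy": "Always",
            "initContainers": [
                {"name": "init1",
                 "image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
                 "command": ["/bin/false"]},
                {"name": "init2",
                 "image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
                 "command": ["/bin/true"]},
            ],
            "containers": [
                {"name": "run1", "image": "k8s.gcr.io/pause:3.4.1"},
            ],
        },
    }

    # Init containers run in order; init2 and run1 are gated on init1
    # succeeding, which it never does.
    assert pod["spec"]["initContainers"][0]["command"] == ["/bin/false"]
    ```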

    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:47.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-6538" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":28,"skipped":418,"failed":0}

    
    S
    ------------------------------
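    Each finished spec emits a one-line JSON summary like the PASSED record above, carrying running counters for the Ginkgo node that ran it. A minimal sketch of pulling fields out of such a line with the stdlib `json` module; the sample record is copied from this log, and the helper name is mine.

    ```python
    import json

    def parse_summary(line: str) -> dict:
        """Parse one per-spec JSON summary line from the build log.

        Each record holds the spec result in "msg" plus that node's running
        counters: "completed", "skipped", "failed", and (when non-zero) a
        "failures" list of failing spec names.
        """
        return json.loads(line)

    # Sample record copied from the log above.
    record = parse_summary(
        '{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not '
        'start app containers if init containers fail on a RestartAlways pod '
        '[Conformance]","total":-1,"completed":28,"skipped":418,"failed":0}'
    )
    assert record["msg"].startswith("PASSED")
    assert record["completed"] == 28 and record["failed"] == 0
    ```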
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:47.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-6373" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":29,"skipped":419,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:59:47.867: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename job
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a job
    STEP: Ensuring job reaches completions
    [AfterEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:53.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-9099" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":30,"skipped":449,"failed":0}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:59:53.946: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-abe90275-d8aa-4b30-adc5-0c3412595071
    STEP: Creating a pod to test consume configMaps
    Sep 15 20:59:53.987: INFO: Waiting up to 5m0s for pod "pod-configmaps-83a44534-68ab-4afc-9c4a-aba4c9ba5a07" in namespace "configmap-1082" to be "Succeeded or Failed"
    Sep 15 20:59:53.990: INFO: Pod "pod-configmaps-83a44534-68ab-4afc-9c4a-aba4c9ba5a07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.96028ms
    Sep 15 20:59:55.994: INFO: Pod "pod-configmaps-83a44534-68ab-4afc-9c4a-aba4c9ba5a07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007022324s
    STEP: Saw pod success
    Sep 15 20:59:55.994: INFO: Pod "pod-configmaps-83a44534-68ab-4afc-9c4a-aba4c9ba5a07" satisfied condition "Succeeded or Failed"
    Sep 15 20:59:55.997: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-configmaps-83a44534-68ab-4afc-9c4a-aba4c9ba5a07 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 15 20:59:56.014: INFO: Waiting for pod pod-configmaps-83a44534-68ab-4afc-9c4a-aba4c9ba5a07 to disappear
    Sep 15 20:59:56.018: INFO: Pod pod-configmaps-83a44534-68ab-4afc-9c4a-aba4c9ba5a07 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:56.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-1082" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":464,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 20:59:56.040: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override arguments
    Sep 15 20:59:56.078: INFO: Waiting up to 5m0s for pod "client-containers-1cb8c7a7-65a6-4407-b62e-70f1624619b4" in namespace "containers-3466" to be "Succeeded or Failed"
    Sep 15 20:59:56.082: INFO: Pod "client-containers-1cb8c7a7-65a6-4407-b62e-70f1624619b4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.304944ms
    Sep 15 20:59:58.086: INFO: Pod "client-containers-1cb8c7a7-65a6-4407-b62e-70f1624619b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006909329s
    STEP: Saw pod success
    Sep 15 20:59:58.086: INFO: Pod "client-containers-1cb8c7a7-65a6-4407-b62e-70f1624619b4" satisfied condition "Succeeded or Failed"
    Sep 15 20:59:58.089: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod client-containers-1cb8c7a7-65a6-4407-b62e-70f1624619b4 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 15 20:59:58.107: INFO: Waiting for pod client-containers-1cb8c7a7-65a6-4407-b62e-70f1624619b4 to disappear
    Sep 15 20:59:58.110: INFO: Pod client-containers-1cb8c7a7-65a6-4407-b62e-70f1624619b4 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 20:59:58.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-3466" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":472,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:00:08.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-5099" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":628,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
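    The summary records above with "failed":7 carry that parallel node's cumulative "failures" list, so a spec retried several times (the Aggregator spec here) appears once per attempt. A hedged sketch of collecting the distinct failing spec names across such records; the function name and the miniature stand-in records are mine, shaped like the log's summary lines.

    ```python
    import json

    def unique_failures(summary_lines):
        """Collect distinct failing spec names across per-spec JSON summary
        records, preserving first-seen order. The "failures" list is
        cumulative per node and repeats a retried spec once per attempt,
        so deduplication is needed to get the real failure set."""
        seen, out = set(), []
        for line in summary_lines:
            for name in json.loads(line).get("failures", []):
                if name not in seen:
                    seen.add(name)
                    out.append(name)
        return out

    # Miniature stand-in records shaped like the log's summary lines.
    lines = [
        '{"msg":"PASSED spec-a","failed":2,"failures":["f1","f1"]}',
        '{"msg":"PASSED spec-b","failed":3,"failures":["f1","f2"]}',
    ]
    assert unique_failures(lines) == ["f1", "f2"]
    ```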
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:00:15.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-5313" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":31,"skipped":725,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:00:16.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-8144" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":32,"skipped":749,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:00:16.043: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename endpointslice
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:00:20.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-545" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":33,"skipped":749,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:00:27.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-4350" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":34,"skipped":756,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 54 lines ...
    STEP: Destroying namespace "services-4090" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":33,"skipped":482,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 21:00:27.530: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ef1f88f-ab2d-448b-b44f-dd25dd929493" in namespace "downward-api-1334" to be "Succeeded or Failed"
    Sep 15 21:00:27.534: INFO: Pod "downwardapi-volume-3ef1f88f-ab2d-448b-b44f-dd25dd929493": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356142ms
    Sep 15 21:00:29.538: INFO: Pod "downwardapi-volume-3ef1f88f-ab2d-448b-b44f-dd25dd929493": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008652042s
    STEP: Saw pod success
    Sep 15 21:00:29.539: INFO: Pod "downwardapi-volume-3ef1f88f-ab2d-448b-b44f-dd25dd929493" satisfied condition "Succeeded or Failed"
    Sep 15 21:00:29.541: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod downwardapi-volume-3ef1f88f-ab2d-448b-b44f-dd25dd929493 container client-container: <nil>
    STEP: delete the pod
    Sep 15 21:00:29.559: INFO: Waiting for pod downwardapi-volume-3ef1f88f-ab2d-448b-b44f-dd25dd929493 to disappear
    Sep 15 21:00:29.562: INFO: Pod downwardapi-volume-3ef1f88f-ab2d-448b-b44f-dd25dd929493 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:00:29.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-1334" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":774,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:00:29.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-5548" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":36,"skipped":782,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
    STEP: Destroying namespace "webhook-9100-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":37,"skipped":814,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:00:43.414: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-b80c08d5-f169-4406-941b-816aa401a4e0
    STEP: Creating a pod to test consume secrets
    Sep 15 21:00:43.501: INFO: Waiting up to 5m0s for pod "pod-secrets-f5252040-bf1a-4ff1-967a-7ca055af9cc3" in namespace "secrets-3783" to be "Succeeded or Failed"
    Sep 15 21:00:43.725: INFO: Pod "pod-secrets-f5252040-bf1a-4ff1-967a-7ca055af9cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 224.436673ms
    Sep 15 21:00:45.730: INFO: Pod "pod-secrets-f5252040-bf1a-4ff1-967a-7ca055af9cc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.22916547s
    STEP: Saw pod success
    Sep 15 21:00:45.730: INFO: Pod "pod-secrets-f5252040-bf1a-4ff1-967a-7ca055af9cc3" satisfied condition "Succeeded or Failed"
    Sep 15 21:00:45.733: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod pod-secrets-f5252040-bf1a-4ff1-967a-7ca055af9cc3 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 15 21:00:45.749: INFO: Waiting for pod pod-secrets-f5252040-bf1a-4ff1-967a-7ca055af9cc3 to disappear
    Sep 15 21:00:45.752: INFO: Pod pod-secrets-f5252040-bf1a-4ff1-967a-7ca055af9cc3 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:00:45.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3783" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":828,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 21:00:45.837: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30672162-fe05-4387-aaa4-ee41a4be4393" in namespace "downward-api-7532" to be "Succeeded or Failed"
    Sep 15 21:00:45.841: INFO: Pod "downwardapi-volume-30672162-fe05-4387-aaa4-ee41a4be4393": Phase="Pending", Reason="", readiness=false. Elapsed: 3.496128ms
    Sep 15 21:00:47.846: INFO: Pod "downwardapi-volume-30672162-fe05-4387-aaa4-ee41a4be4393": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008929992s
    STEP: Saw pod success
    Sep 15 21:00:47.846: INFO: Pod "downwardapi-volume-30672162-fe05-4387-aaa4-ee41a4be4393" satisfied condition "Succeeded or Failed"
    Sep 15 21:00:47.853: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod downwardapi-volume-30672162-fe05-4387-aaa4-ee41a4be4393 container client-container: <nil>
    STEP: delete the pod
    Sep 15 21:00:47.877: INFO: Waiting for pod downwardapi-volume-30672162-fe05-4387-aaa4-ee41a4be4393 to disappear
    Sep 15 21:00:47.880: INFO: Pod downwardapi-volume-30672162-fe05-4387-aaa4-ee41a4be4393 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:00:47.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7532" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":852,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:00:53.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-1826" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":40,"skipped":864,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 42 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:01:16.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-5504" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":40,"skipped":634,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 89 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:01:21.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-4279" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":41,"skipped":644,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:01:24.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-7127" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":41,"skipped":868,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
    [It] should contain environment variables for services [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 15 21:01:21.196: INFO: The status of Pod server-envvars-6ae58a3e-ff57-4762-b408-4792615a64b7 is Pending, waiting for it to be Running (with Ready = true)
    Sep 15 21:01:23.201: INFO: The status of Pod server-envvars-6ae58a3e-ff57-4762-b408-4792615a64b7 is Running (Ready = true)
    Sep 15 21:01:23.223: INFO: Waiting up to 5m0s for pod "client-envvars-8ac70093-ca35-4f43-806b-5c3c8a2723e4" in namespace "pods-831" to be "Succeeded or Failed"
    Sep 15 21:01:23.231: INFO: Pod "client-envvars-8ac70093-ca35-4f43-806b-5c3c8a2723e4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.935653ms
    Sep 15 21:01:25.235: INFO: Pod "client-envvars-8ac70093-ca35-4f43-806b-5c3c8a2723e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012387459s
    STEP: Saw pod success
    Sep 15 21:01:25.235: INFO: Pod "client-envvars-8ac70093-ca35-4f43-806b-5c3c8a2723e4" satisfied condition "Succeeded or Failed"
    Sep 15 21:01:25.238: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f pod client-envvars-8ac70093-ca35-4f43-806b-5c3c8a2723e4 container env3cont: <nil>
    STEP: delete the pod
    Sep 15 21:01:25.261: INFO: Waiting for pod client-envvars-8ac70093-ca35-4f43-806b-5c3c8a2723e4 to disappear
    Sep 15 21:01:25.263: INFO: Pod client-envvars-8ac70093-ca35-4f43-806b-5c3c8a2723e4 no longer exists
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:01:25.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-831" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":647,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:01:25.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-7060" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":679,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:01:31.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-6860" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":44,"skipped":716,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:02:01.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-1202" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":42,"skipped":897,"failed":7,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep 15 21:01:35.834: INFO: File wheezy_udp@dns-test-service-3.dns-2112.svc.cluster.local from pod  dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 15 21:01:35.838: INFO: File jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local from pod  dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 15 21:01:35.838: INFO: Lookups using dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 failed for: [wheezy_udp@dns-test-service-3.dns-2112.svc.cluster.local jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local]

    
    Sep 15 21:01:40.843: INFO: File wheezy_udp@dns-test-service-3.dns-2112.svc.cluster.local from pod  dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 15 21:01:40.847: INFO: File jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local from pod  dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 15 21:01:40.848: INFO: Lookups using dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 failed for: [wheezy_udp@dns-test-service-3.dns-2112.svc.cluster.local jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local]

    
    Sep 15 21:01:45.844: INFO: File wheezy_udp@dns-test-service-3.dns-2112.svc.cluster.local from pod  dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 15 21:01:45.848: INFO: File jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local from pod  dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 15 21:01:45.848: INFO: Lookups using dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 failed for: [wheezy_udp@dns-test-service-3.dns-2112.svc.cluster.local jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local]

    
    Sep 15 21:01:50.842: INFO: File wheezy_udp@dns-test-service-3.dns-2112.svc.cluster.local from pod  dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 15 21:01:50.847: INFO: File jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local from pod  dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 15 21:01:50.847: INFO: Lookups using dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 failed for: [wheezy_udp@dns-test-service-3.dns-2112.svc.cluster.local jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local]

    
    Sep 15 21:01:55.844: INFO: File wheezy_udp@dns-test-service-3.dns-2112.svc.cluster.local from pod  dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 15 21:01:55.848: INFO: File jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local from pod  dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 15 21:01:55.848: INFO: Lookups using dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 failed for: [wheezy_udp@dns-test-service-3.dns-2112.svc.cluster.local jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local]

    
    Sep 15 21:02:00.844: INFO: File wheezy_udp@dns-test-service-3.dns-2112.svc.cluster.local from pod  dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 15 21:02:00.848: INFO: File jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local from pod  dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 contains 'foo.example.com.
    ' instead of 'bar.example.com.'
    Sep 15 21:02:00.849: INFO: Lookups using dns-2112/dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 failed for: [wheezy_udp@dns-test-service-3.dns-2112.svc.cluster.local jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local]

    
    Sep 15 21:02:05.851: INFO: DNS probes using dns-test-25d6b1a8-44af-4c24-b94f-57afa2171200 succeeded
    
    STEP: deleting the pod
    STEP: changing the service to type=ClusterIP
    STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2112.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2112.svc.cluster.local; sleep 1; done
... skipping 2 lines ...
    
    STEP: creating a third pod to probe DNS
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep 15 21:02:07.926: INFO: File jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local from pod  dns-2112/dns-test-ff700b6c-73c5-4988-a90f-1fd19c2b6167 contains '' instead of '10.142.192.119'
    Sep 15 21:02:07.926: INFO: Lookups using dns-2112/dns-test-ff700b6c-73c5-4988-a90f-1fd19c2b6167 failed for: [jessie_udp@dns-test-service-3.dns-2112.svc.cluster.local]

    
    Sep 15 21:02:12.935: INFO: DNS probes using dns-test-ff700b6c-73c5-4988-a90f-1fd19c2b6167 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test externalName service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:02:12.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-2112" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":45,"skipped":747,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:02:39.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6407" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":46,"skipped":793,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:02:50.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-5083" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":47,"skipped":796,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:02:57.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-3528" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":48,"skipped":800,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with downward pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-downwardapi-cz2p
    STEP: Creating a pod to test atomic-volume-subpath
    Sep 15 21:02:57.285: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-cz2p" in namespace "subpath-6434" to be "Succeeded or Failed"
    Sep 15 21:02:57.288: INFO: Pod "pod-subpath-test-downwardapi-cz2p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126626ms
    Sep 15 21:02:59.293: INFO: Pod "pod-subpath-test-downwardapi-cz2p": Phase="Running", Reason="", readiness=true. Elapsed: 2.006796695s
    Sep 15 21:03:01.298: INFO: Pod "pod-subpath-test-downwardapi-cz2p": Phase="Running", Reason="", readiness=true. Elapsed: 4.011508657s
    Sep 15 21:03:03.302: INFO: Pod "pod-subpath-test-downwardapi-cz2p": Phase="Running", Reason="", readiness=true. Elapsed: 6.015667415s
    Sep 15 21:03:05.306: INFO: Pod "pod-subpath-test-downwardapi-cz2p": Phase="Running", Reason="", readiness=true. Elapsed: 8.020372901s
    Sep 15 21:03:07.311: INFO: Pod "pod-subpath-test-downwardapi-cz2p": Phase="Running", Reason="", readiness=true. Elapsed: 10.024901703s
    Sep 15 21:03:09.315: INFO: Pod "pod-subpath-test-downwardapi-cz2p": Phase="Running", Reason="", readiness=true. Elapsed: 12.02936792s
    Sep 15 21:03:11.320: INFO: Pod "pod-subpath-test-downwardapi-cz2p": Phase="Running", Reason="", readiness=true. Elapsed: 14.034286472s
    Sep 15 21:03:13.325: INFO: Pod "pod-subpath-test-downwardapi-cz2p": Phase="Running", Reason="", readiness=true. Elapsed: 16.0386198s
    Sep 15 21:03:15.334: INFO: Pod "pod-subpath-test-downwardapi-cz2p": Phase="Running", Reason="", readiness=true. Elapsed: 18.048108035s
    Sep 15 21:03:17.338: INFO: Pod "pod-subpath-test-downwardapi-cz2p": Phase="Running", Reason="", readiness=true. Elapsed: 20.05223129s
    Sep 15 21:03:19.344: INFO: Pod "pod-subpath-test-downwardapi-cz2p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.057513582s
    STEP: Saw pod success
    Sep 15 21:03:19.344: INFO: Pod "pod-subpath-test-downwardapi-cz2p" satisfied condition "Succeeded or Failed"
    Sep 15 21:03:19.347: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-subpath-test-downwardapi-cz2p container test-container-subpath-downwardapi-cz2p: <nil>
    STEP: delete the pod
    Sep 15 21:03:19.373: INFO: Waiting for pod pod-subpath-test-downwardapi-cz2p to disappear
    Sep 15 21:03:19.376: INFO: Pod pod-subpath-test-downwardapi-cz2p no longer exists
    STEP: Deleting pod pod-subpath-test-downwardapi-cz2p
    Sep 15 21:03:19.376: INFO: Deleting pod "pod-subpath-test-downwardapi-cz2p" in namespace "subpath-6434"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:03:19.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-6434" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":49,"skipped":801,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:03:21.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-233" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":50,"skipped":818,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:03:21.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-3480" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":51,"skipped":851,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:03:21.441: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should mount projected service account token [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test service account token: 
    Sep 15 21:03:21.477: INFO: Waiting up to 5m0s for pod "test-pod-c122ef4e-3b16-4eb7-96ba-f682decd06a7" in namespace "svcaccounts-4950" to be "Succeeded or Failed"
    Sep 15 21:03:21.480: INFO: Pod "test-pod-c122ef4e-3b16-4eb7-96ba-f682decd06a7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.644549ms
    Sep 15 21:03:23.485: INFO: Pod "test-pod-c122ef4e-3b16-4eb7-96ba-f682decd06a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008273049s
    STEP: Saw pod success
    Sep 15 21:03:23.485: INFO: Pod "test-pod-c122ef4e-3b16-4eb7-96ba-f682decd06a7" satisfied condition "Succeeded or Failed"
    Sep 15 21:03:23.488: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod test-pod-c122ef4e-3b16-4eb7-96ba-f682decd06a7 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 15 21:03:23.509: INFO: Waiting for pod test-pod-c122ef4e-3b16-4eb7-96ba-f682decd06a7 to disappear
    Sep 15 21:03:23.512: INFO: Pod test-pod-c122ef4e-3b16-4eb7-96ba-f682decd06a7 no longer exists
    [AfterEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:03:23.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-4950" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":52,"skipped":853,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.898 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":72,"skipped":1101,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:03:41.251: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-a06af8bd-8029-4ae3-8636-99d6cdd40ea8
    STEP: Creating a pod to test consume secrets
    Sep 15 21:03:41.298: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8c478460-17b8-4caf-8536-9ec8ec91319b" in namespace "projected-5351" to be "Succeeded or Failed"
    Sep 15 21:03:41.302: INFO: Pod "pod-projected-secrets-8c478460-17b8-4caf-8536-9ec8ec91319b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.009697ms
    Sep 15 21:03:43.306: INFO: Pod "pod-projected-secrets-8c478460-17b8-4caf-8536-9ec8ec91319b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006924195s
    STEP: Saw pod success
    Sep 15 21:03:43.306: INFO: Pod "pod-projected-secrets-8c478460-17b8-4caf-8536-9ec8ec91319b" satisfied condition "Succeeded or Failed"
    Sep 15 21:03:43.309: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-projected-secrets-8c478460-17b8-4caf-8536-9ec8ec91319b container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep 15 21:03:43.325: INFO: Waiting for pod pod-projected-secrets-8c478460-17b8-4caf-8536-9ec8ec91319b to disappear
    Sep 15 21:03:43.327: INFO: Pod pod-projected-secrets-8c478460-17b8-4caf-8536-9ec8ec91319b no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:03:43.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5351" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":73,"skipped":1124,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:03:45.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-5020" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":74,"skipped":1144,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:03:51.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-1210" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":75,"skipped":1199,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-3496" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":76,"skipped":1234,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:04:04.787: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name projected-secret-test-7793b4ba-85f4-44e2-bcee-7eb1775fbed2
    STEP: Creating a pod to test consume secrets
    Sep 15 21:04:04.862: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5572fbff-f36d-40c2-a7a4-c6db3bf0bc9a" in namespace "projected-3669" to be "Succeeded or Failed"
    Sep 15 21:04:04.865: INFO: Pod "pod-projected-secrets-5572fbff-f36d-40c2-a7a4-c6db3bf0bc9a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.43978ms
    Sep 15 21:04:06.870: INFO: Pod "pod-projected-secrets-5572fbff-f36d-40c2-a7a4-c6db3bf0bc9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008833904s
    STEP: Saw pod success
    Sep 15 21:04:06.871: INFO: Pod "pod-projected-secrets-5572fbff-f36d-40c2-a7a4-c6db3bf0bc9a" satisfied condition "Succeeded or Failed"
    Sep 15 21:04:06.875: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-projected-secrets-5572fbff-f36d-40c2-a7a4-c6db3bf0bc9a container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 15 21:04:06.899: INFO: Waiting for pod pod-projected-secrets-5572fbff-f36d-40c2-a7a4-c6db3bf0bc9a to disappear
    Sep 15 21:04:06.901: INFO: Pod pod-projected-secrets-5572fbff-f36d-40c2-a7a4-c6db3bf0bc9a no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:06.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3669" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":77,"skipped":1259,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:04:06.992: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-a6435602-6c40-47d7-b440-05ef5f8ca56e
    STEP: Creating a pod to test consume configMaps
    Sep 15 21:04:07.042: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-688d90b1-14f4-4055-b63d-d9359d7dbfff" in namespace "projected-3378" to be "Succeeded or Failed"
    Sep 15 21:04:07.045: INFO: Pod "pod-projected-configmaps-688d90b1-14f4-4055-b63d-d9359d7dbfff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.533843ms
    Sep 15 21:04:09.051: INFO: Pod "pod-projected-configmaps-688d90b1-14f4-4055-b63d-d9359d7dbfff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008776474s
    STEP: Saw pod success
    Sep 15 21:04:09.051: INFO: Pod "pod-projected-configmaps-688d90b1-14f4-4055-b63d-d9359d7dbfff" satisfied condition "Succeeded or Failed"
    Sep 15 21:04:09.054: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-projected-configmaps-688d90b1-14f4-4055-b63d-d9359d7dbfff container agnhost-container: <nil>
    STEP: delete the pod
    Sep 15 21:04:09.069: INFO: Waiting for pod pod-projected-configmaps-688d90b1-14f4-4055-b63d-d9359d7dbfff to disappear
    Sep 15 21:04:09.072: INFO: Pod pod-projected-configmaps-688d90b1-14f4-4055-b63d-d9359d7dbfff no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:09.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3378" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":78,"skipped":1313,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 21:04:09.130: INFO: Waiting up to 5m0s for pod "downwardapi-volume-442dc0e8-418c-46a8-9a6f-39f28dc8b28d" in namespace "projected-8156" to be "Succeeded or Failed"
    Sep 15 21:04:09.135: INFO: Pod "downwardapi-volume-442dc0e8-418c-46a8-9a6f-39f28dc8b28d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.631363ms
    Sep 15 21:04:11.139: INFO: Pod "downwardapi-volume-442dc0e8-418c-46a8-9a6f-39f28dc8b28d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009116451s
    STEP: Saw pod success
    Sep 15 21:04:11.139: INFO: Pod "downwardapi-volume-442dc0e8-418c-46a8-9a6f-39f28dc8b28d" satisfied condition "Succeeded or Failed"
    Sep 15 21:04:11.143: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f pod downwardapi-volume-442dc0e8-418c-46a8-9a6f-39f28dc8b28d container client-container: <nil>
    STEP: delete the pod
    Sep 15 21:04:11.164: INFO: Waiting for pod downwardapi-volume-442dc0e8-418c-46a8-9a6f-39f28dc8b28d to disappear
    Sep 15 21:04:11.167: INFO: Pod downwardapi-volume-442dc0e8-418c-46a8-9a6f-39f28dc8b28d no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:11.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8156" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":79,"skipped":1319,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
    STEP: verifying the pod is in kubernetes
    STEP: updating the pod
    Sep 15 21:04:13.762: INFO: Successfully updated pod "pod-update-activedeadlineseconds-dba74b6a-fcb8-4608-b376-70ce6f6fb0d1"
    Sep 15 21:04:13.762: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-dba74b6a-fcb8-4608-b376-70ce6f6fb0d1" in namespace "pods-6471" to be "terminated due to deadline exceeded"
    Sep 15 21:04:13.765: INFO: Pod "pod-update-activedeadlineseconds-dba74b6a-fcb8-4608-b376-70ce6f6fb0d1": Phase="Running", Reason="", readiness=true. Elapsed: 2.873195ms
    Sep 15 21:04:15.771: INFO: Pod "pod-update-activedeadlineseconds-dba74b6a-fcb8-4608-b376-70ce6f6fb0d1": Phase="Running", Reason="", readiness=true. Elapsed: 2.008212766s
    Sep 15 21:04:17.774: INFO: Pod "pod-update-activedeadlineseconds-dba74b6a-fcb8-4608-b376-70ce6f6fb0d1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.012037623s
    Sep 15 21:04:17.774: INFO: Pod "pod-update-activedeadlineseconds-dba74b6a-fcb8-4608-b376-70ce6f6fb0d1" satisfied condition "terminated due to deadline exceeded"
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:17.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6471" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":80,"skipped":1330,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] server version
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:17.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "server-version-9596" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":81,"skipped":1356,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:19.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-1094" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":82,"skipped":1361,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
    STEP: creating replication controller affinity-clusterip in namespace services-8486
    I0915 21:02:01.736955      14 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-8486, replica count: 3
    I0915 21:02:04.788483      14 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
    Sep 15 21:02:04.794: INFO: Creating new exec pod
    Sep 15 21:02:07.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:02:10.011: INFO: rc: 1
    Sep 15 21:02:10.011: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
... skipping 192 lines ...
    Sep 15 21:02:47.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:02:49.236: INFO: rc: 1
    Sep 15 21:02:49.236: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:02:50.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:02:52.205: INFO: rc: 1
    Sep 15 21:02:52.205: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:02:53.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:02:55.201: INFO: rc: 1
    Sep 15 21:02:55.201: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:02:56.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:02:58.182: INFO: rc: 1
    Sep 15 21:02:58.182: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:02:59.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:01.189: INFO: rc: 1
    Sep 15 21:03:01.189: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:02.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:04.174: INFO: rc: 1
    Sep 15 21:03:04.175: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:05.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:07.201: INFO: rc: 1
    Sep 15 21:03:07.202: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:08.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:10.199: INFO: rc: 1
    Sep 15 21:03:10.199: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:11.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:13.185: INFO: rc: 1
    Sep 15 21:03:13.185: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:14.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:16.185: INFO: rc: 1
    Sep 15 21:03:16.185: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:17.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:19.188: INFO: rc: 1
    Sep 15 21:03:19.188: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:20.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:22.218: INFO: rc: 1
    Sep 15 21:03:22.218: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + nc -v -t -w 2 affinity-clusterip 80
    + echo hostName
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:23.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:25.179: INFO: rc: 1
    Sep 15 21:03:25.179: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:26.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:28.195: INFO: rc: 1
    Sep 15 21:03:28.195: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:29.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:31.203: INFO: rc: 1
    Sep 15 21:03:31.204: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:32.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:34.202: INFO: rc: 1
    Sep 15 21:03:34.202: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:35.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:37.189: INFO: rc: 1
    Sep 15 21:03:37.189: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:38.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:40.194: INFO: rc: 1
    Sep 15 21:03:40.194: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:41.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:43.189: INFO: rc: 1
    Sep 15 21:03:43.189: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:44.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:46.184: INFO: rc: 1
    Sep 15 21:03:46.184: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:47.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:49.201: INFO: rc: 1
    Sep 15 21:03:49.201: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:50.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:52.203: INFO: rc: 1
    Sep 15 21:03:52.203: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:53.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:55.189: INFO: rc: 1
    Sep 15 21:03:55.189: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:56.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:03:58.176: INFO: rc: 1
    Sep 15 21:03:58.176: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:03:59.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:04:01.197: INFO: rc: 1
    Sep 15 21:04:01.197: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:04:02.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:04:04.179: INFO: rc: 1
    Sep 15 21:04:04.179: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:04:05.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:04:07.189: INFO: rc: 1
    Sep 15 21:04:07.189: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:04:08.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:04:10.213: INFO: rc: 1
    Sep 15 21:04:10.213: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + nc -v -t -w 2 affinity-clusterip 80
    + echo hostName
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:04:10.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
    Sep 15 21:04:12.427: INFO: rc: 1
    Sep 15 21:04:12.428: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8486 exec execpod-affinityr9kwn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-clusterip 80
    nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep 15 21:04:12.428: FAIL: Unexpected error:

        <*errors.errorString | 0xc000a3af20>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
    occurred
    
... skipping 27 lines ...
    • Failure [142.954 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 21:04:12.428: Unexpected error:

          <*errors.errorString | 0xc000a3af20>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
      occurred
    
... skipping 50 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:41.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-8263" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":83,"skipped":1381,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] IngressClass API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:41.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingressclass-8477" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":84,"skipped":1394,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods Extended
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:41.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-944" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":85,"skipped":1437,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:41.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-642" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":86,"skipped":1461,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:04:41.472: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep 15 21:04:41.506: INFO: Waiting up to 5m0s for pod "pod-2592183a-f219-4753-8dce-d604216ab81e" in namespace "emptydir-5086" to be "Succeeded or Failed"

    Sep 15 21:04:41.510: INFO: Pod "pod-2592183a-f219-4753-8dce-d604216ab81e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224254ms
    Sep 15 21:04:43.514: INFO: Pod "pod-2592183a-f219-4753-8dce-d604216ab81e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008033902s
    STEP: Saw pod success
    Sep 15 21:04:43.514: INFO: Pod "pod-2592183a-f219-4753-8dce-d604216ab81e" satisfied condition "Succeeded or Failed"

    Sep 15 21:04:43.517: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-2592183a-f219-4753-8dce-d604216ab81e container test-container: <nil>
    STEP: delete the pod
    Sep 15 21:04:43.537: INFO: Waiting for pod pod-2592183a-f219-4753-8dce-d604216ab81e to disappear
    Sep 15 21:04:43.540: INFO: Pod pod-2592183a-f219-4753-8dce-d604216ab81e no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:43.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-5086" for this suite.
    
    •
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":42,"skipped":940,"failed":8,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:04:24.645: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 42 lines ...
    STEP: Destroying namespace "services-6349" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":43,"skipped":940,"failed":8,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":87,"skipped":1491,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:04:43.550: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep 15 21:04:43.583: INFO: Waiting up to 5m0s for pod "pod-b2a6fdfa-61d5-465b-825d-83d95c340b05" in namespace "emptydir-3092" to be "Succeeded or Failed"
    Sep 15 21:04:43.587: INFO: Pod "pod-b2a6fdfa-61d5-465b-825d-83d95c340b05": Phase="Pending", Reason="", readiness=false. Elapsed: 3.244087ms
    Sep 15 21:04:45.591: INFO: Pod "pod-b2a6fdfa-61d5-465b-825d-83d95c340b05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007824044s
    STEP: Saw pod success
    Sep 15 21:04:45.591: INFO: Pod "pod-b2a6fdfa-61d5-465b-825d-83d95c340b05" satisfied condition "Succeeded or Failed"
    Sep 15 21:04:45.594: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-b2a6fdfa-61d5-465b-825d-83d95c340b05 container test-container: <nil>
    STEP: delete the pod
    Sep 15 21:04:45.609: INFO: Waiting for pod pod-b2a6fdfa-61d5-465b-825d-83d95c340b05 to disappear
    Sep 15 21:04:45.614: INFO: Pod pod-b2a6fdfa-61d5-465b-825d-83d95c340b05 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:45.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-3092" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":88,"skipped":1491,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:04:45.647: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's args [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's args
    Sep 15 21:04:45.684: INFO: Waiting up to 5m0s for pod "var-expansion-bb443ed1-d026-4bbc-a89c-baa019d586b8" in namespace "var-expansion-2682" to be "Succeeded or Failed"
    Sep 15 21:04:45.690: INFO: Pod "var-expansion-bb443ed1-d026-4bbc-a89c-baa019d586b8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.732973ms
    Sep 15 21:04:47.694: INFO: Pod "var-expansion-bb443ed1-d026-4bbc-a89c-baa019d586b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010227369s
    STEP: Saw pod success
    Sep 15 21:04:47.694: INFO: Pod "var-expansion-bb443ed1-d026-4bbc-a89c-baa019d586b8" satisfied condition "Succeeded or Failed"
    Sep 15 21:04:47.698: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod var-expansion-bb443ed1-d026-4bbc-a89c-baa019d586b8 container dapi-container: <nil>
    STEP: delete the pod
    Sep 15 21:04:47.715: INFO: Waiting for pod var-expansion-bb443ed1-d026-4bbc-a89c-baa019d586b8 to disappear
    Sep 15 21:04:47.718: INFO: Pod var-expansion-bb443ed1-d026-4bbc-a89c-baa019d586b8 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:47.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-2682" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":89,"skipped":1505,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:04:47.756: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on node default medium
    Sep 15 21:04:47.811: INFO: Waiting up to 5m0s for pod "pod-f4ae772f-d1ea-4613-b6d1-d2d04276c375" in namespace "emptydir-1765" to be "Succeeded or Failed"
    Sep 15 21:04:47.818: INFO: Pod "pod-f4ae772f-d1ea-4613-b6d1-d2d04276c375": Phase="Pending", Reason="", readiness=false. Elapsed: 6.76789ms
    Sep 15 21:04:49.824: INFO: Pod "pod-f4ae772f-d1ea-4613-b6d1-d2d04276c375": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012713662s
    STEP: Saw pod success
    Sep 15 21:04:49.824: INFO: Pod "pod-f4ae772f-d1ea-4613-b6d1-d2d04276c375" satisfied condition "Succeeded or Failed"
    Sep 15 21:04:49.827: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-f4ae772f-d1ea-4613-b6d1-d2d04276c375 container test-container: <nil>
    STEP: delete the pod
    Sep 15 21:04:49.850: INFO: Waiting for pod pod-f4ae772f-d1ea-4613-b6d1-d2d04276c375 to disappear
    Sep 15 21:04:49.853: INFO: Pod pod-f4ae772f-d1ea-4613-b6d1-d2d04276c375 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:49.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1765" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":90,"skipped":1521,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:04:49.864: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep 15 21:04:49.903: INFO: Waiting up to 5m0s for pod "pod-3438dba6-63ae-470d-9e18-43cc21635e99" in namespace "emptydir-2720" to be "Succeeded or Failed"
    Sep 15 21:04:49.907: INFO: Pod "pod-3438dba6-63ae-470d-9e18-43cc21635e99": Phase="Pending", Reason="", readiness=false. Elapsed: 3.516627ms
    Sep 15 21:04:51.911: INFO: Pod "pod-3438dba6-63ae-470d-9e18-43cc21635e99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008195544s
    STEP: Saw pod success
    Sep 15 21:04:51.911: INFO: Pod "pod-3438dba6-63ae-470d-9e18-43cc21635e99" satisfied condition "Succeeded or Failed"
    Sep 15 21:04:51.915: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod pod-3438dba6-63ae-470d-9e18-43cc21635e99 container test-container: <nil>
    STEP: delete the pod
    Sep 15 21:04:51.932: INFO: Waiting for pod pod-3438dba6-63ae-470d-9e18-43cc21635e99 to disappear
    Sep 15 21:04:51.935: INFO: Pod pod-3438dba6-63ae-470d-9e18-43cc21635e99 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:04:51.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-2720" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":91,"skipped":1522,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:04:51.947: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pod-network-test
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 39 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:05:18.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-6441" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":92,"skipped":1522,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Discovery
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 89 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:05:18.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "discovery-3667" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":93,"skipped":1528,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:05:41.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-2387" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":94,"skipped":1530,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:05:41.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3944" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":95,"skipped":1540,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
    STEP: creating replication controller externalname-service in namespace services-7430
    I0915 21:05:41.393093      15 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7430, replica count: 2
    I0915 21:05:44.444739      15 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
    Sep 15 21:05:44.444: INFO: Creating new exec pod
    Sep 15 21:05:47.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7430 exec execpodzr8bm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
    Sep 15 21:05:49.649: INFO: rc: 1
    Sep 15 21:05:49.649: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7430 exec execpodzr8bm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 externalname-service 80
    nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep 15 21:05:50.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7430 exec execpodzr8bm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
    Sep 15 21:05:52.839: INFO: rc: 1
    Sep 15 21:05:52.839: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7430 exec execpodzr8bm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 externalname-service 80
    nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep 15 21:05:53.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7430 exec execpodzr8bm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
    Sep 15 21:05:55.832: INFO: rc: 1
    Sep 15 21:05:55.832: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7430 exec execpodzr8bm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
    Command stdout:
    
    stderr:
    + nc -v -t -w 2 externalname-service 80
    + echo hostName
    nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep 15 21:05:56.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7430 exec execpodzr8bm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
    Sep 15 21:05:58.870: INFO: rc: 1
    Sep 15 21:05:58.870: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7430 exec execpodzr8bm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 externalname-service 80
    nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep 15 21:05:59.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7430 exec execpodzr8bm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
    Sep 15 21:05:59.825: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
    Sep 15 21:05:59.825: INFO: stdout: "externalname-service-j7wph"
    Sep 15 21:05:59.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7430 exec execpodzr8bm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.130.235.203 80'
    Sep 15 21:06:01.979: INFO: rc: 1
    Sep 15 21:06:01.979: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7430 exec execpodzr8bm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.130.235.203 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 10.130.235.203 80
    nc: connect to 10.130.235.203 port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep 15 21:06:02.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7430 exec execpodzr8bm -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.130.235.203 80'
    Sep 15 21:06:03.148: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.130.235.203 80\nConnection to 10.130.235.203 80 port [tcp/http] succeeded!\n"
    Sep 15 21:06:03.148: INFO: stdout: "externalname-service-j7wph"
    Sep 15 21:06:03.148: INFO: Cleaning up the ExternalName to ClusterIP test service
... skipping 3 lines ...
    STEP: Destroying namespace "services-7430" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":96,"skipped":1587,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 65 lines ...
    STEP: Destroying namespace "services-4021" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":44,"skipped":943,"failed":8,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:06:07.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7408" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":97,"skipped":1616,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:06:07.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-2806" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":98,"skipped":1641,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 21:06:07.590: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82628d1f-b3b0-4c3b-ad3d-472130ce8c1a" in namespace "downward-api-5924" to be "Succeeded or Failed"
    Sep 15 21:06:07.593: INFO: Pod "downwardapi-volume-82628d1f-b3b0-4c3b-ad3d-472130ce8c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.733796ms
    Sep 15 21:06:09.597: INFO: Pod "downwardapi-volume-82628d1f-b3b0-4c3b-ad3d-472130ce8c1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006530842s
    STEP: Saw pod success
    Sep 15 21:06:09.597: INFO: Pod "downwardapi-volume-82628d1f-b3b0-4c3b-ad3d-472130ce8c1a" satisfied condition "Succeeded or Failed"
    Sep 15 21:06:09.600: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod downwardapi-volume-82628d1f-b3b0-4c3b-ad3d-472130ce8c1a container client-container: <nil>
    STEP: delete the pod
    Sep 15 21:06:09.626: INFO: Waiting for pod downwardapi-volume-82628d1f-b3b0-4c3b-ad3d-472130ce8c1a to disappear
    Sep 15 21:06:09.629: INFO: Pod downwardapi-volume-82628d1f-b3b0-4c3b-ad3d-472130ce8c1a no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:06:09.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-5924" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":99,"skipped":1653,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-9400-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":45,"skipped":966,"failed":8,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:06:11.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6091" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":100,"skipped":1654,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
    STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
    Sep 15 21:06:26.072: INFO: Waiting for webhook configuration to be ready...
    Sep 15 21:06:36.183: INFO: Waiting for webhook configuration to be ready...
    Sep 15 21:06:46.286: INFO: Waiting for webhook configuration to be ready...
    Sep 15 21:06:56.383: INFO: Waiting for webhook configuration to be ready...
    Sep 15 21:07:06.394: INFO: Waiting for webhook configuration to be ready...
    Sep 15 21:07:06.394: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc0002be280>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 21:07:06.394: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc0002be280>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1361
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":100,"skipped":1660,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:07:06.473: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-9784-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":101,"skipped":1660,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep 15 21:07:10.193: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c209143a-e664-4c8a-8982-1276068e8b05" in namespace "downward-api-7122" to be "Succeeded or Failed"
    Sep 15 21:07:10.197: INFO: Pod "downwardapi-volume-c209143a-e664-4c8a-8982-1276068e8b05": Phase="Pending", Reason="", readiness=false. Elapsed: 3.994975ms
    Sep 15 21:07:12.202: INFO: Pod "downwardapi-volume-c209143a-e664-4c8a-8982-1276068e8b05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00892147s
    STEP: Saw pod success
    Sep 15 21:07:12.202: INFO: Pod "downwardapi-volume-c209143a-e664-4c8a-8982-1276068e8b05" satisfied condition "Succeeded or Failed"
    Sep 15 21:07:12.205: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-w58p08 pod downwardapi-volume-c209143a-e664-4c8a-8982-1276068e8b05 container client-container: <nil>
    STEP: delete the pod
    Sep 15 21:07:12.220: INFO: Waiting for pod downwardapi-volume-c209143a-e664-4c8a-8982-1276068e8b05 to disappear
    Sep 15 21:07:12.224: INFO: Pod downwardapi-volume-c209143a-e664-4c8a-8982-1276068e8b05 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:07:12.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7122" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":102,"skipped":1663,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:07:13.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4887" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":103,"skipped":1732,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Destroying namespace "services-4555" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":104,"skipped":1775,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:07:15.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-7758" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":105,"skipped":1783,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:07:18.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-1776" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":106,"skipped":1788,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 101 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:07:41.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-0" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":46,"skipped":1010,"failed":8,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 38 lines ...
    Sep 15 21:00:35.249: INFO: stderr: ""
    Sep 15 21:00:35.249: INFO: stdout: "true"
    Sep 15 21:00:35.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5640 get pods update-demo-nautilus-f8gpn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 15 21:00:35.346: INFO: stderr: ""
    Sep 15 21:00:35.346: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep 15 21:00:35.346: INFO: validating pod update-demo-nautilus-f8gpn
    Sep 15 21:04:08.323: INFO: update-demo-nautilus-f8gpn is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-f8gpn)
    Sep 15 21:04:13.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5640 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
    Sep 15 21:04:13.412: INFO: stderr: ""
    Sep 15 21:04:13.412: INFO: stdout: "update-demo-nautilus-btr2x update-demo-nautilus-f8gpn "
    Sep 15 21:04:13.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5640 get pods update-demo-nautilus-btr2x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
    Sep 15 21:04:13.502: INFO: stderr: ""
    Sep 15 21:04:13.502: INFO: stdout: "true"
... skipping 11 lines ...
    Sep 15 21:04:13.686: INFO: stderr: ""
    Sep 15 21:04:13.686: INFO: stdout: "true"
    Sep 15 21:04:13.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5640 get pods update-demo-nautilus-f8gpn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 15 21:04:13.777: INFO: stderr: ""
    Sep 15 21:04:13.777: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep 15 21:04:13.777: INFO: validating pod update-demo-nautilus-f8gpn
    Sep 15 21:07:47.459: INFO: update-demo-nautilus-f8gpn is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-f8gpn)
    Sep 15 21:07:52.461: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:311 +0x29b
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000dfa480)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 159 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:07:56.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1631" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":47,"skipped":1053,"failed":8,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
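    [editor's note] The `kubectl get pods -o template` invocations in this spec use Go `text/template` syntax. A small stdlib sketch of how `--template={{range .items}}{{.metadata.name}} {{end}}` renders a pod list into the space-separated stdout seen above (the JSON sample and pod names here are illustrative, not taken from the run):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// renderNames executes the same template string the e2e test passes to
// kubectl, against a decoded PodList-shaped JSON document.
func renderNames(podList []byte) (string, error) {
	var data map[string]interface{}
	if err := json.Unmarshal(podList, &data); err != nil {
		return "", err
	}
	tmpl, err := template.New("names").Parse(`{{range .items}}{{.metadata.name}} {{end}}`)
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := tmpl.Execute(&out, data); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	// Illustrative pod-list JSON in the shape kubectl receives from the
	// API server (pod names are hypothetical).
	sample := []byte(`{"items":[
		{"metadata":{"name":"update-demo-nautilus-aaaaa"}},
		{"metadata":{"name":"update-demo-nautilus-bbbbb"}}]}`)
	names, err := renderNames(sample)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", names)
	// "update-demo-nautilus-aaaaa update-demo-nautilus-bbbbb "
}
```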

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:08:10.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-7268" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":107,"skipped":1809,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    Sep 15 21:07:58.886: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:07:58.890: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:07:58.912: INFO: Unable to read jessie_udp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:07:58.915: INFO: Unable to read jessie_tcp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:07:58.918: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:07:58.921: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:07:58.946: INFO: Lookups using dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291 failed for: [wheezy_udp@dns-test-service.dns-9283.svc.cluster.local wheezy_tcp@dns-test-service.dns-9283.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local jessie_udp@dns-test-service.dns-9283.svc.cluster.local jessie_tcp@dns-test-service.dns-9283.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local]
    
    Sep 15 21:08:03.957: INFO: Unable to read wheezy_udp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:03.961: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:03.965: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:03.970: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:03.998: INFO: Unable to read jessie_udp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:04.002: INFO: Unable to read jessie_tcp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:04.006: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:04.010: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:04.031: INFO: Lookups using dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291 failed for: [wheezy_udp@dns-test-service.dns-9283.svc.cluster.local wheezy_tcp@dns-test-service.dns-9283.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local jessie_udp@dns-test-service.dns-9283.svc.cluster.local jessie_tcp@dns-test-service.dns-9283.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local]
    
    Sep 15 21:08:08.952: INFO: Unable to read wheezy_udp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:08.956: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:08.960: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:08.963: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:08.985: INFO: Unable to read jessie_udp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:08.989: INFO: Unable to read jessie_tcp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:08.992: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:08.995: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:09.015: INFO: Lookups using dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291 failed for: [wheezy_udp@dns-test-service.dns-9283.svc.cluster.local wheezy_tcp@dns-test-service.dns-9283.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local jessie_udp@dns-test-service.dns-9283.svc.cluster.local jessie_tcp@dns-test-service.dns-9283.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local]
    
    Sep 15 21:08:13.951: INFO: Unable to read wheezy_udp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:13.955: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:13.958: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:13.962: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:13.982: INFO: Unable to read jessie_udp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:13.986: INFO: Unable to read jessie_tcp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:13.989: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:13.992: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:14.010: INFO: Lookups using dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291 failed for: [wheezy_udp@dns-test-service.dns-9283.svc.cluster.local wheezy_tcp@dns-test-service.dns-9283.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local jessie_udp@dns-test-service.dns-9283.svc.cluster.local jessie_tcp@dns-test-service.dns-9283.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local]
    
    Sep 15 21:08:18.952: INFO: Unable to read wheezy_udp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:18.956: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:18.960: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:18.964: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:18.992: INFO: Unable to read jessie_udp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:18.996: INFO: Unable to read jessie_tcp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:19.000: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:19.004: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:19.025: INFO: Lookups using dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291 failed for: [wheezy_udp@dns-test-service.dns-9283.svc.cluster.local wheezy_tcp@dns-test-service.dns-9283.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local jessie_udp@dns-test-service.dns-9283.svc.cluster.local jessie_tcp@dns-test-service.dns-9283.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local]
    
    Sep 15 21:08:23.952: INFO: Unable to read wheezy_udp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:23.956: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:23.960: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:23.964: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:23.997: INFO: Unable to read jessie_udp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:24.001: INFO: Unable to read jessie_tcp@dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:24.005: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:24.009: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local from pod dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291: the server could not find the requested resource (get pods dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291)
    Sep 15 21:08:24.036: INFO: Lookups using dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291 failed for: [wheezy_udp@dns-test-service.dns-9283.svc.cluster.local wheezy_tcp@dns-test-service.dns-9283.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local jessie_udp@dns-test-service.dns-9283.svc.cluster.local jessie_tcp@dns-test-service.dns-9283.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9283.svc.cluster.local]
    
    Sep 15 21:08:29.014: INFO: DNS probes using dns-9283/dns-test-d1b6f713-5bb1-4fac-9596-ebec250ff291 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:08:29.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-9283" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":48,"skipped":1057,"failed":8,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
    STEP: Destroying namespace "services-3264" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":49,"skipped":1062,"failed":8,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:08:29.240: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 15 21:08:29.281: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-5d82eff3-d7ca-46e1-8f69-8685e1e32fdd" in namespace "security-context-test-8413" to be "Succeeded or Failed"
    Sep 15 21:08:29.284: INFO: Pod "busybox-privileged-false-5d82eff3-d7ca-46e1-8f69-8685e1e32fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.713155ms
    Sep 15 21:08:31.293: INFO: Pod "busybox-privileged-false-5d82eff3-d7ca-46e1-8f69-8685e1e32fdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011689857s
    Sep 15 21:08:31.293: INFO: Pod "busybox-privileged-false-5d82eff3-d7ca-46e1-8f69-8685e1e32fdd" satisfied condition "Succeeded or Failed"
    Sep 15 21:08:31.309: INFO: Got logs for pod "busybox-privileged-false-5d82eff3-d7ca-46e1-8f69-8685e1e32fdd": "ip: RTNETLINK answers: Operation not permitted\n"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:08:31.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-8413" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":1093,"failed":8,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:08:41.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-7533" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":51,"skipped":1094,"failed":8,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:08:41.770: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override all
    Sep 15 21:08:41.824: INFO: Waiting up to 5m0s for pod "client-containers-44601b07-56dd-4753-91bc-7efcfcdefe98" in namespace "containers-3116" to be "Succeeded or Failed"
    Sep 15 21:08:41.830: INFO: Pod "client-containers-44601b07-56dd-4753-91bc-7efcfcdefe98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.645937ms
    Sep 15 21:08:43.835: INFO: Pod "client-containers-44601b07-56dd-4753-91bc-7efcfcdefe98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011264161s
    STEP: Saw pod success
    Sep 15 21:08:43.835: INFO: Pod "client-containers-44601b07-56dd-4753-91bc-7efcfcdefe98" satisfied condition "Succeeded or Failed"
    Sep 15 21:08:43.838: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod client-containers-44601b07-56dd-4753-91bc-7efcfcdefe98 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 15 21:08:43.864: INFO: Waiting for pod client-containers-44601b07-56dd-4753-91bc-7efcfcdefe98 to disappear
    Sep 15 21:08:43.868: INFO: Pod client-containers-44601b07-56dd-4753-91bc-7efcfcdefe98 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:08:43.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-3116" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":1136,"failed":8,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:08:43.967: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's command [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's command
    Sep 15 21:08:44.006: INFO: Waiting up to 5m0s for pod "var-expansion-9ef8f5bd-302d-4acc-8637-b2b7767dbfe2" in namespace "var-expansion-1708" to be "Succeeded or Failed"
    Sep 15 21:08:44.010: INFO: Pod "var-expansion-9ef8f5bd-302d-4acc-8637-b2b7767dbfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.592713ms
    Sep 15 21:08:46.014: INFO: Pod "var-expansion-9ef8f5bd-302d-4acc-8637-b2b7767dbfe2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007926414s
    STEP: Saw pod success
    Sep 15 21:08:46.014: INFO: Pod "var-expansion-9ef8f5bd-302d-4acc-8637-b2b7767dbfe2" satisfied condition "Succeeded or Failed"
    Sep 15 21:08:46.017: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod var-expansion-9ef8f5bd-302d-4acc-8637-b2b7767dbfe2 container dapi-container: <nil>
    STEP: delete the pod
    Sep 15 21:08:46.028: INFO: Waiting for pod var-expansion-9ef8f5bd-302d-4acc-8637-b2b7767dbfe2 to disappear
    Sep 15 21:08:46.031: INFO: Pod var-expansion-9ef8f5bd-302d-4acc-8637-b2b7767dbfe2 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:08:46.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-1708" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":1196,"failed":8,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:08:47.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-7680" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":1231,"failed":8,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 5 lines ...
    [It] should call prestop when killing a pod  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating server pod server in namespace prestop-6894
    STEP: Waiting for pods to come up.
    STEP: Creating tester pod tester in namespace prestop-6894
    STEP: Deleting pre-stop pod
    STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
    STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
    Sep 15 21:10:39.487: FAIL: validating pre-stop.
    Unexpected error:
        <*errors.errorString | 0xc0002b8290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-node] PreStop
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
      should call prestop when killing a pod  [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 21:10:39.487: validating pre-stop.
      Unexpected error:
          <*errors.errorString | 0xc0002b8290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151
    ------------------------------
    {"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":52,"skipped":884,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:10:39.515: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename prestop
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 222 lines ...
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    	],
    	"StillContactingPeers": true
    }
    Sep 15 21:11:43.596: FAIL: validating pre-stop.
    Unexpected error:
        <*errors.errorString | 0xc0002b8290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-node] PreStop
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
      should call prestop when killing a pod  [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 21:11:43.596: validating pre-stop.
      Unexpected error:
          <*errors.errorString | 0xc0002b8290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151
    ------------------------------
    {"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":52,"skipped":884,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:11:43.615: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename prestop
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 222 lines ...
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    	],
    	"StillContactingPeers": true
    }
    Sep 15 21:12:47.701: FAIL: validating pre-stop.
    Unexpected error:
        <*errors.errorString | 0xc0002b8290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 21 lines ...
    [sig-node] PreStop
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
      should call prestop when killing a pod  [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep 15 21:12:47.701: validating pre-stop.
      Unexpected error:
          <*errors.errorString | 0xc0002b8290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151
    ------------------------------
    {"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":52,"skipped":884,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:12:50.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-9236" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":53,"skipped":907,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:13:07.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-4382" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":54,"skipped":910,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 41 lines ...
    Sep 15 21:13:37.984: INFO: Pod pod-with-prestop-exec-hook still exists
    Sep 15 21:13:39.979: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
    Sep 15 21:13:39.984: INFO: Pod pod-with-prestop-exec-hook still exists
    Sep 15 21:13:41.979: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
    Sep 15 21:13:41.983: INFO: Pod pod-with-prestop-exec-hook no longer exists
    STEP: check prestop hook
    Sep 15 21:14:11.984: FAIL: Timed out after 30.000s.

    Expected
        <*errors.errorString | 0xc005873620>: {
            s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"2022/09/15 21:13:08 Started HTTP server on port 8080\\n2022/09/15 21:13:08 Started UDP server on port  8081\\n\"",

        }
    to be nil
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/node.glob..func11.1.2(0xc001c27c00)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79 +0x342
... skipping 21 lines ...
        should execute prestop exec hook properly [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 15 21:14:11.984: Timed out after 30.000s.
        Expected
            <*errors.errorString | 0xc005873620>: {
                s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"2022/09/15 21:13:08 Started HTTP server on port 8080\\n2022/09/15 21:13:08 Started UDP server on port  8081\\n\"",

            }
        to be nil
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79
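    The failure above comes from the check at lifecycle_hook.go:79: the test scans the echo server's log for a request line matching the regexp `GET /echo\?msg=prestop`, and the captured output only ever contained the two server start-up lines, so the 30s match timed out. A minimal stand-in for that assertion, re-expressed with `grep -E` (the log text is copied from the failing run above; in a passing run the prestop hook's HTTP request would have added a matching `GET /echo?msg=prestop` line first):

    ```shell
    #!/bin/sh
    # Server log captured in the failing run: start-up lines only, no prestop GET.
    log='2022/09/15 21:13:08 Started HTTP server on port 8080
    2022/09/15 21:13:08 Started UDP server on port  8081'

    # The e2e assertion, approximated with grep -E: look for the prestop echo request.
    if printf '%s\n' "$log" | grep -Eq 'GET /echo\?msg=prestop'; then
      echo "prestop hook observed"
    else
      echo "prestop hook missing"   # this is the branch the failing run hit
    fi
    ```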
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
... skipping 136 lines ...
    Sep 15 21:09:11.338: INFO: ss-2  k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-sdr8f  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 21:08:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 21:08:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-15 21:08:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-15 21:08:31 +0000 UTC  }]
    Sep 15 21:09:11.338: INFO: 
    Sep 15 21:09:11.338: INFO: StatefulSet ss has not reached scale 0, at 3
    STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8684
    Sep 15 21:09:12.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8684 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 15 21:09:12.474: INFO: rc: 1
    Sep 15 21:09:12.474: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8684 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    error: unable to upgrade connection: container not found ("webserver")

    
    error:

    exit status 1
    Sep 15 21:09:22.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8684 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 15 21:09:22.563: INFO: rc: 1
    Sep 15 21:09:22.563: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8684 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-0" not found

    
    error:

    exit status 1
    ... skipping 364 lines (28 further retries of the same RunHostCmd every 10s, Sep 15 21:09:32 through 21:14:05, each failing with: Error from server (NotFound): pods "ss-0" not found) ...
    Sep 15 21:14:15.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8684 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep 15 21:14:15.574: INFO: rc: 1
    Sep 15 21:14:15.575: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
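    The five-minute stretch above is the e2e framework's RunHostCmd retry behavior: re-run `kubectl exec` every 10s until it succeeds or the suite gives up. Here it never succeeds because pod ss-0 has already been deleted by the scale-down, so the `|| true` inside the pod command never even executes. The shape of that loop, sketched as portable shell with a stand-in command in place of `kubectl exec` (the function name and marker file below are illustrative, not part of the framework):

    ```shell
    #!/bin/sh
    # retry_cmd DEADLINE_SECS INTERVAL_SECS CMD...: re-run CMD every INTERVAL_SECS
    # until it exits 0 or DEADLINE_SECS elapse; mirrors the RunHostCmd retries above.
    retry_cmd() {
      deadline=$1 interval=$2; shift 2
      elapsed=0
      while [ "$elapsed" -le "$deadline" ]; do
        if "$@"; then
          return 0
        fi
        sleep "$interval"
        elapsed=$((elapsed + interval))
      done
      echo "timed out after ${deadline}s" >&2
      return 1
    }

    # Stand-in for `kubectl exec ss-0 -- ...` (illustrative): succeeds once a marker exists.
    marker=/tmp/ss0-ready.$$
    touch "$marker"
    retry_cmd 30 1 test -f "$marker" && echo "command succeeded"
    rm -f "$marker"
    ```

    With a pod that is gone for good, as in this run, the inner command fails on every iteration and the loop only ends when the deadline expires, which is exactly the log pattern shown.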
    Sep 15 21:14:15.575: INFO: Scaling statefulset ss to 0
    Sep 15 21:14:15.592: INFO: Waiting for statefulset status.replicas updated to 0
... skipping 14 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
        Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":108,"skipped":1823,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":913,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:14:11.995: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-lifecycle-hook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 38 lines ...
    Sep 15 21:14:42.072: INFO: Pod pod-with-prestop-exec-hook still exists
    Sep 15 21:14:44.068: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
    Sep 15 21:14:44.073: INFO: Pod pod-with-prestop-exec-hook still exists
    Sep 15 21:14:46.069: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
    Sep 15 21:14:46.072: INFO: Pod pod-with-prestop-exec-hook no longer exists
    STEP: check prestop hook
    Sep 15 21:15:16.074: FAIL: Timed out after 30.001s.

    Expected
        <*errors.errorString | 0xc000a076e0>: {
            s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"2022/09/15 21:14:12 Started HTTP server on port 8080\\n2022/09/15 21:14:12 Started UDP server on port  8081\\n\"",

        }
    to be nil
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/node.glob..func11.1.2(0xc00634b800)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79 +0x342
... skipping 21 lines ...
        should execute prestop exec hook properly [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 15 21:15:16.074: Timed out after 30.001s.
        Expected
            <*errors.errorString | 0xc000a076e0>: {
                s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"2022/09/15 21:14:12 Started HTTP server on port 8080\\n2022/09/15 21:14:12 Started UDP server on port  8081\\n\"",

            }
        to be nil
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79
    ------------------------------
    {"msg":"FAILED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":33,"skipped":510,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:07:52.802: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
    Sep 15 21:07:58.496: INFO: stderr: ""
    Sep 15 21:07:58.496: INFO: stdout: "true"
    Sep 15 21:07:58.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2974 get pods update-demo-nautilus-8vrzr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 15 21:07:58.601: INFO: stderr: ""
    Sep 15 21:07:58.601: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep 15 21:07:58.601: INFO: validating pod update-demo-nautilus-8vrzr
    Sep 15 21:11:32.735: INFO: update-demo-nautilus-8vrzr is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-8vrzr)

    Sep 15 21:11:37.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2974 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
    Sep 15 21:11:37.831: INFO: stderr: ""
    Sep 15 21:11:37.831: INFO: stdout: "update-demo-nautilus-8vrzr update-demo-nautilus-xqms9 "
    Sep 15 21:11:37.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2974 get pods update-demo-nautilus-8vrzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
    Sep 15 21:11:37.930: INFO: stderr: ""
    Sep 15 21:11:37.930: INFO: stdout: "true"
    Sep 15 21:11:37.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2974 get pods update-demo-nautilus-8vrzr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep 15 21:11:38.023: INFO: stderr: ""
    Sep 15 21:11:38.023: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep 15 21:11:38.023: INFO: validating pod update-demo-nautilus-8vrzr
    Sep 15 21:15:11.871: INFO: update-demo-nautilus-8vrzr is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-8vrzr)

    Sep 15 21:15:16.872: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:311 +0x29b
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000dfa480)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 28 lines ...
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 15 21:15:16.872: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:311
    ------------------------------
    {"msg":"FAILED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":33,"skipped":510,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:15:17.233: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 58 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:15:24.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3577" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":34,"skipped":510,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:15:24.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-4112" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":35,"skipped":521,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:15:24.792: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-6524a2fe-d478-4e69-a65b-dfde51c17ba7
    STEP: Creating a pod to test consume configMaps
    Sep 15 21:15:24.833: INFO: Waiting up to 5m0s for pod "pod-configmaps-8ac72dd2-786c-416b-849f-a940f1094124" in namespace "configmap-3340" to be "Succeeded or Failed"
    Sep 15 21:15:24.839: INFO: Pod "pod-configmaps-8ac72dd2-786c-416b-849f-a940f1094124": Phase="Pending", Reason="", readiness=false. Elapsed: 5.828264ms
    Sep 15 21:15:26.844: INFO: Pod "pod-configmaps-8ac72dd2-786c-416b-849f-a940f1094124": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010783015s
    STEP: Saw pod success
    Sep 15 21:15:26.844: INFO: Pod "pod-configmaps-8ac72dd2-786c-416b-849f-a940f1094124" satisfied condition "Succeeded or Failed"
    Sep 15 21:15:26.848: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-configmaps-8ac72dd2-786c-416b-849f-a940f1094124 container agnhost-container: <nil>
    STEP: delete the pod
    Sep 15 21:15:26.862: INFO: Waiting for pod pod-configmaps-8ac72dd2-786c-416b-849f-a940f1094124 to disappear
    Sep 15 21:15:26.867: INFO: Pod pod-configmaps-8ac72dd2-786c-416b-849f-a940f1094124 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:15:26.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3340" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":525,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:15:26.885: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide host IP as an env var [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep 15 21:15:26.924: INFO: Waiting up to 5m0s for pod "downward-api-c30d7f67-51d6-496e-aefe-4b925db7f3ee" in namespace "downward-api-3184" to be "Succeeded or Failed"
    Sep 15 21:15:26.929: INFO: Pod "downward-api-c30d7f67-51d6-496e-aefe-4b925db7f3ee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.430595ms
    Sep 15 21:15:28.934: INFO: Pod "downward-api-c30d7f67-51d6-496e-aefe-4b925db7f3ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008235555s
    STEP: Saw pod success
    Sep 15 21:15:28.934: INFO: Pod "downward-api-c30d7f67-51d6-496e-aefe-4b925db7f3ee" satisfied condition "Succeeded or Failed"
    Sep 15 21:15:28.937: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod downward-api-c30d7f67-51d6-496e-aefe-4b925db7f3ee container dapi-container: <nil>
    STEP: delete the pod
    Sep 15 21:15:28.953: INFO: Waiting for pod downward-api-c30d7f67-51d6-496e-aefe-4b925db7f3ee to disappear
    Sep 15 21:15:28.956: INFO: Pod downward-api-c30d7f67-51d6-496e-aefe-4b925db7f3ee no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:15:28.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-3184" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":528,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 43 lines ...
    STEP: Destroying namespace "services-6290" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":38,"skipped":532,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:15:29.119: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create ConfigMap with empty key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap that has name configmap-test-emptyKey-b758da27-94bd-4e1f-8345-da2ecf15eadd
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:15:29.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8998" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":39,"skipped":534,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:15:32.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7975" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":40,"skipped":539,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:15:38.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-838" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":543,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:15:51.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2015" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":42,"skipped":558,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:15:51.563: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep 15 21:15:51.601: INFO: Waiting up to 5m0s for pod "security-context-e381fc70-d9cd-477e-b661-12e66eef1c68" in namespace "security-context-7020" to be "Succeeded or Failed"
    Sep 15 21:15:51.604: INFO: Pod "security-context-e381fc70-d9cd-477e-b661-12e66eef1c68": Phase="Pending", Reason="", readiness=false. Elapsed: 3.304721ms
    Sep 15 21:15:53.608: INFO: Pod "security-context-e381fc70-d9cd-477e-b661-12e66eef1c68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006855976s
    STEP: Saw pod success
    Sep 15 21:15:53.608: INFO: Pod "security-context-e381fc70-d9cd-477e-b661-12e66eef1c68" satisfied condition "Succeeded or Failed"
    Sep 15 21:15:53.611: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod security-context-e381fc70-d9cd-477e-b661-12e66eef1c68 container test-container: <nil>
    STEP: delete the pod
    Sep 15 21:15:53.624: INFO: Waiting for pod security-context-e381fc70-d9cd-477e-b661-12e66eef1c68 to disappear
    Sep 15 21:15:53.628: INFO: Pod security-context-e381fc70-d9cd-477e-b661-12e66eef1c68 no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:15:53.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-7020" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":43,"skipped":565,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":913,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:15:16.085: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-lifecycle-hook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 28 lines ...
    Sep 15 21:15:36.176: INFO: Pod pod-with-prestop-exec-hook still exists
    Sep 15 21:15:38.171: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
    Sep 15 21:15:38.175: INFO: Pod pod-with-prestop-exec-hook still exists
    Sep 15 21:15:40.171: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
    Sep 15 21:15:40.174: INFO: Pod pod-with-prestop-exec-hook no longer exists
    STEP: check prestop hook
    Sep 15 21:16:10.175: FAIL: Timed out after 30.000s.
    Expected
        <*errors.errorString | 0xc002524e10>: {
            s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"2022/09/15 21:15:16 Started HTTP server on port 8080\\n2022/09/15 21:15:16 Started UDP server on port  8081\\n\"",
        }
    to be nil
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/common/node.glob..func11.1.2(0xc0011b2c00)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79 +0x342
... skipping 21 lines ...
        should execute prestop exec hook properly [NodeConformance] [Conformance] [It]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
        Sep 15 21:16:10.175: Timed out after 30.000s.
        Expected
            <*errors.errorString | 0xc002524e10>: {
                s: "failed to match regexp \"GET /echo\\\\?msg=prestop\" in output \"2022/09/15 21:15:16 Started HTTP server on port 8080\\n2022/09/15 21:15:16 Started UDP server on port  8081\\n\"",
            }
        to be nil
    
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79
    ------------------------------
    {"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":913,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

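The prestop-hook failure above comes down to a regexp match: the test greps the agnhost echo server's request log for the prestop hook's HTTP call, and the failing run's log contained only the server-startup lines. A minimal sketch of that check — the pattern and the observed output are taken verbatim from the failure message, while `hookFired` and the passing-log line are illustrative, not the framework's API:

```go
package main

import (
	"fmt"
	"regexp"
)

// prestopPattern is the regexp from the failure message above.
var prestopPattern = regexp.MustCompile(`GET /echo\?msg=prestop`)

// hookFired reports whether the echo server's request log shows the
// prestop hook's HTTP call (illustrative helper name).
func hookFired(serverLog string) bool {
	return prestopPattern.MatchString(serverLog)
}

func main() {
	// The output the failing run actually produced: the server started,
	// but the prestop GET never arrived within the 30s match window.
	observed := "2022/09/15 21:15:16 Started HTTP server on port 8080\n" +
		"2022/09/15 21:15:16 Started UDP server on port  8081\n"
	fmt.Println(hookFired(observed)) // prints "false"

	// What a passing run's log would additionally contain (illustrative line).
	fmt.Println(hookFired(observed + "GET /echo?msg=prestop\n")) // prints "true"
}
```

So the timeout is not the pod failing to delete — the pod terminated on schedule — but the hook's request never reaching the target server's log before the 30-second match deadline.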
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 336 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:16:25.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-4757" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":55,"skipped":954,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:16:25.950: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-413999bc-397f-4a27-995e-1f3ee8de24eb
    STEP: Creating a pod to test consume secrets
    Sep 15 21:16:25.989: INFO: Waiting up to 5m0s for pod "pod-secrets-ce49a595-9569-4068-ae37-7516261808bd" in namespace "secrets-1179" to be "Succeeded or Failed"
    Sep 15 21:16:25.992: INFO: Pod "pod-secrets-ce49a595-9569-4068-ae37-7516261808bd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.012134ms
    Sep 15 21:16:27.996: INFO: Pod "pod-secrets-ce49a595-9569-4068-ae37-7516261808bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007300659s
    STEP: Saw pod success
    Sep 15 21:16:27.996: INFO: Pod "pod-secrets-ce49a595-9569-4068-ae37-7516261808bd" satisfied condition "Succeeded or Failed"
    Sep 15 21:16:27.999: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-md-0-wgrwb-695c7f45fb-57lx4 pod pod-secrets-ce49a595-9569-4068-ae37-7516261808bd container secret-volume-test: <nil>
    STEP: delete the pod
    Sep 15 21:16:28.019: INFO: Waiting for pod pod-secrets-ce49a595-9569-4068-ae37-7516261808bd to disappear
    Sep 15 21:16:28.021: INFO: Pod pod-secrets-ce49a595-9569-4068-ae37-7516261808bd no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:16:28.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-1179" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":955,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:16:33.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-1538" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":57,"skipped":964,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    • [SLOW TEST:142.408 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should have monotonically increasing restart count [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":109,"skipped":1834,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:16:38.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-1128" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":110,"skipped":1851,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:16:38.303: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep 15 21:16:38.352: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-2dc082bc-97b3-4423-8adb-10119df59870" in namespace "security-context-test-3664" to be "Succeeded or Failed"
    Sep 15 21:16:38.362: INFO: Pod "alpine-nnp-false-2dc082bc-97b3-4423-8adb-10119df59870": Phase="Pending", Reason="", readiness=false. Elapsed: 9.39273ms
    Sep 15 21:16:40.367: INFO: Pod "alpine-nnp-false-2dc082bc-97b3-4423-8adb-10119df59870": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014570009s
    Sep 15 21:16:42.372: INFO: Pod "alpine-nnp-false-2dc082bc-97b3-4423-8adb-10119df59870": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019410672s
    Sep 15 21:16:42.372: INFO: Pod "alpine-nnp-false-2dc082bc-97b3-4423-8adb-10119df59870" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:16:42.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-3664" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":111,"skipped":1859,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:16:42.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-4925" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":112,"skipped":1877,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:16:48.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3623" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":113,"skipped":1887,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 158 lines ...
    Sep 15 21:16:34.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8820 create -f -'
    Sep 15 21:16:35.110: INFO: stderr: ""
    Sep 15 21:16:35.110: INFO: stdout: "deployment.apps/agnhost-replica created\n"
    STEP: validating guestbook app
    Sep 15 21:16:35.110: INFO: Waiting for all frontend pods to be Running.
    Sep 15 21:16:40.162: INFO: Waiting for frontend to serve content.
    Sep 15 21:16:45.175: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 

    Sep 15 21:16:50.185: INFO: Trying to add a new entry to the guestbook.
    Sep 15 21:16:50.195: INFO: Verifying that added entry can be retrieved.
    Sep 15 21:16:50.206: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}

    STEP: using delete to clean up resources
    Sep 15 21:16:55.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8820 delete --grace-period=0 --force -f -'
    Sep 15 21:16:55.325: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
    Sep 15 21:16:55.325: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
    STEP: using delete to clean up resources
    Sep 15 21:16:55.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8820 delete --grace-period=0 --force -f -'
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:16:56.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-8820" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":58,"skipped":1044,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:17:04.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3084" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":114,"skipped":1899,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:17:12.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-336" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1108,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:17:12.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-4869" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":115,"skipped":1902,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:17:12.270: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-ac04b345-552e-4710-9b46-750eb76ce553
    STEP: Creating a pod to test consume configMaps
    Sep 15 21:17:12.316: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-40d1847b-67ae-48f7-80d3-70e854cd841d" in namespace "projected-4841" to be "Succeeded or Failed"
    Sep 15 21:17:12.319: INFO: Pod "pod-projected-configmaps-40d1847b-67ae-48f7-80d3-70e854cd841d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.252082ms
    Sep 15 21:17:14.325: INFO: Pod "pod-projected-configmaps-40d1847b-67ae-48f7-80d3-70e854cd841d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00902527s
    STEP: Saw pod success
    Sep 15 21:17:14.325: INFO: Pod "pod-projected-configmaps-40d1847b-67ae-48f7-80d3-70e854cd841d" satisfied condition "Succeeded or Failed"
    Sep 15 21:17:14.329: INFO: Trying to get logs from node k8s-upgrade-and-conformance-soloe4-worker-3bhzw2 pod pod-projected-configmaps-40d1847b-67ae-48f7-80d3-70e854cd841d container agnhost-container: <nil>
    STEP: delete the pod
    Sep 15 21:17:14.347: INFO: Waiting for pod pod-projected-configmaps-40d1847b-67ae-48f7-80d3-70e854cd841d to disappear
    Sep 15 21:17:14.350: INFO: Pod pod-projected-configmaps-40d1847b-67ae-48f7-80d3-70e854cd841d no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:17:14.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4841" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1112,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:17:18.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3709" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":61,"skipped":1120,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep 15 21:17:18.464: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pods
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:17:19.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6359" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":62,"skipped":1120,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:17:24.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-5349" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":63,"skipped":1131,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:17:31.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-631" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":64,"skipped":1173,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
    Sep 15 21:17:37.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-1788 explain e2e-test-crd-publish-openapi-6450-crds.spec'
    Sep 15 21:17:37.256: INFO: stderr: ""
    Sep 15 21:17:37.256: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6450-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
    Sep 15 21:17:37.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-1788 explain e2e-test-crd-publish-openapi-6450-crds.spec.bars'
    Sep 15 21:17:37.485: INFO: stderr: ""
    Sep 15 21:17:37.485: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6450-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
    STEP: kubectl explain works to return error when explain is called on property that doesn't exist
    Sep 15 21:17:37.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-1788 explain e2e-test-crd-publish-openapi-6450-crds.spec.bars2'
    Sep 15 21:17:37.735: INFO: rc: 1
    [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:17:40.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-1788" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":65,"skipped":1181,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:17:53.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-4860" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":66,"skipped":1202,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:18:15.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-watch-7051" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":116,"skipped":1904,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep 15 21:18:17.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-1320" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":117,"skipped":1907,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    Sep 15 21:18:01.294: INFO: Unable to read jessie_udp@dns-test-service.dns-7088 from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:01.297: INFO: Unable to read jessie_tcp@dns-test-service.dns-7088 from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:01.301: INFO: Unable to read jessie_udp@dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:01.305: INFO: Unable to read jessie_tcp@dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:01.308: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:01.312: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:01.336: INFO: Lookups using dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7088 wheezy_tcp@dns-test-service.dns-7088 wheezy_udp@dns-test-service.dns-7088.svc wheezy_tcp@dns-test-service.dns-7088.svc wheezy_udp@_http._tcp.dns-test-service.dns-7088.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7088.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7088 jessie_tcp@dns-test-service.dns-7088 jessie_udp@dns-test-service.dns-7088.svc jessie_tcp@dns-test-service.dns-7088.svc jessie_udp@_http._tcp.dns-test-service.dns-7088.svc jessie_tcp@_http._tcp.dns-test-service.dns-7088.svc]

    
    Sep 15 21:18:06.343: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:06.346: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:06.350: INFO: Unable to read wheezy_udp@dns-test-service.dns-7088 from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:06.354: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7088 from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:06.358: INFO: Unable to read wheezy_udp@dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
... skipping 5 lines ...
    Sep 15 21:18:06.400: INFO: Unable to read jessie_udp@dns-test-service.dns-7088 from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:06.403: INFO: Unable to read jessie_tcp@dns-test-service.dns-7088 from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:06.407: INFO: Unable to read jessie_udp@dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:06.410: INFO: Unable to read jessie_tcp@dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:06.413: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:06.420: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:06.450: INFO: Lookups using dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7088 wheezy_tcp@dns-test-service.dns-7088 wheezy_udp@dns-test-service.dns-7088.svc wheezy_tcp@dns-test-service.dns-7088.svc wheezy_udp@_http._tcp.dns-test-service.dns-7088.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7088.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7088 jessie_tcp@dns-test-service.dns-7088 jessie_udp@dns-test-service.dns-7088.svc jessie_tcp@dns-test-service.dns-7088.svc jessie_udp@_http._tcp.dns-test-service.dns-7088.svc jessie_tcp@_http._tcp.dns-test-service.dns-7088.svc]

    
    Sep 15 21:18:11.342: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:11.346: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:11.349: INFO: Unable to read wheezy_udp@dns-test-service.dns-7088 from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:11.352: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7088 from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:11.356: INFO: Unable to read wheezy_udp@dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
... skipping 5 lines ...
    Sep 15 21:18:11.397: INFO: Unable to read jessie_udp@dns-test-service.dns-7088 from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:11.400: INFO: Unable to read jessie_tcp@dns-test-service.dns-7088 from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:11.403: INFO: Unable to read jessie_udp@dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:11.407: INFO: Unable to read jessie_tcp@dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:11.410: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:11.414: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:11.440: INFO: Lookups using dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7088 wheezy_tcp@dns-test-service.dns-7088 wheezy_udp@dns-test-service.dns-7088.svc wheezy_tcp@dns-test-service.dns-7088.svc wheezy_udp@_http._tcp.dns-test-service.dns-7088.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7088.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7088 jessie_tcp@dns-test-service.dns-7088 jessie_udp@dns-test-service.dns-7088.svc jessie_tcp@dns-test-service.dns-7088.svc jessie_udp@_http._tcp.dns-test-service.dns-7088.svc jessie_tcp@_http._tcp.dns-test-service.dns-7088.svc]

    
    Sep 15 21:18:16.342: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:16.346: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:16.350: INFO: Unable to read wheezy_udp@dns-test-service.dns-7088 from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:16.354: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7088 from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)
    Sep 15 21:18:16.358: INFO: Unable to read wheezy_udp@dns-test-service.dns-7088.svc from pod dns-7088/dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e: the server could not find the requested resource (get pods dns-test-ff9da3fa-8d2f-4c12-8ecd-3674d7d8148e)