Result: FAILURE
Tests: 0 failed / 7 succeeded
Started: 2022-09-09 20:27
Elapsed: 1h6m
Revision: main

No Test Failures!


7 Passed Tests
20 Skipped Tests

Error lines from build-log.txt

... skipping 903 lines ...
Status: Downloaded newer image for quay.io/jetstack/cert-manager-controller:v1.9.1
quay.io/jetstack/cert-manager-controller:v1.9.1
+ export GINKGO_NODES=3
+ GINKGO_NODES=3
+ export GINKGO_NOCOLOR=true
+ GINKGO_NOCOLOR=true
+ export GINKGO_ARGS=--fail-fast
+ GINKGO_ARGS=--fail-fast
+ export E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ E2E_CONF_FILE=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml
+ export ARTIFACTS=/logs/artifacts
+ ARTIFACTS=/logs/artifacts
+ export SKIP_RESOURCE_CLEANUP=false
+ SKIP_RESOURCE_CLEANUP=false
... skipping 79 lines ...
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6 --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ipv6.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-topology.yaml
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition --load-restrictor LoadRestrictionsNone > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/infrastructure-docker/v1beta1/cluster-template-ignition.yaml
mkdir -p /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/kustomize-v4.5.2 build /home/prow/go/src/sigs.k8s.io/cluster-api/test/extension/config/default > /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/data/test-extension/deployment.yaml
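The four template builds above follow one pattern: render each cluster-template directory with kustomize (allowing out-of-tree references via `LoadRestrictionsNone`) into a sibling `.yaml` file. A minimal sketch of that loop, assuming a cluster-api checkout as the working directory; it only echoes the commands, so it is safe without kustomize installed (drop the `echo` to actually render):

```shell
# Paths taken from the log; the loop itself is a reconstruction, not the
# repository's Makefile rule.
SRC=test/e2e/data/infrastructure-docker/v1beta1
for tpl in cluster-template-kcp-scale-in cluster-template-ipv6 \
           cluster-template-topology cluster-template-ignition; do
  # Real invocation would be: kustomize build ... > "$SRC/$tpl.yaml"
  echo kustomize build "$SRC/$tpl" --load-restrictor LoadRestrictionsNone \
       ">" "$SRC/$tpl.yaml"
done
```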
/home/prow/go/src/sigs.k8s.io/cluster-api/hack/tools/bin/ginkgo-v2.1.4 -v --trace --tags=e2e --focus="\[K8s-Upgrade\]"  --nodes=3 --no-color=true --output-dir="/logs/artifacts" --junit-report="junit.e2e_suite.1.xml" --fail-fast . -- \
    -e2e.artifacts-folder="/logs/artifacts" \
    -e2e.config="/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml" \
    -e2e.skip-resource-cleanup=false -e2e.use-existing-cluster=false
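For local debugging, the exported variables and the Ginkgo invocation above condense into a small script. This is a sketch, not the CI job itself: the artifact directory and plain `ginkgo` binary name are assumptions (CI uses a pinned `ginkgo-v2.1.4` from `hack/tools/bin`), and the run line is left commented because it needs a full cluster-api checkout plus Docker.

```shell
# Environment as set by the CI wrapper (values copied from the log,
# except ARTIFACTS, which is swapped for a local path).
export GINKGO_NODES=3
export GINKGO_NOCOLOR=true
export GINKGO_ARGS=--fail-fast
export E2E_CONF_FILE="$PWD/test/e2e/config/docker.yaml"
export ARTIFACTS=/tmp/artifacts
export SKIP_RESOURCE_CLEANUP=false

# Uncomment inside test/e2e of a cluster-api checkout:
# ginkgo -v --trace --tags=e2e --focus='\[K8s-Upgrade\]' \
#   --nodes="$GINKGO_NODES" --no-color="$GINKGO_NOCOLOR" \
#   --output-dir="$ARTIFACTS" --junit-report=junit.e2e_suite.1.xml \
#   $GINKGO_ARGS . -- \
#   -e2e.artifacts-folder="$ARTIFACTS" \
#   -e2e.config="$E2E_CONF_FILE" \
#   -e2e.skip-resource-cleanup="$SKIP_RESOURCE_CLEANUP" \
#   -e2e.use-existing-cluster=false
```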
go: downloading k8s.io/apimachinery v0.24.2
go: downloading github.com/blang/semver v3.5.1+incompatible
go: downloading k8s.io/api v0.24.2
... skipping 229 lines ...
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-b2vx3j-mp-0-config created
    kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-b2vx3j-mp-0-config-cgroupfs created
    cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-b2vx3j created
    machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-b2vx3j-mp-0 created
    dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-b2vx3j-dmp-0 created

    Failed to get logs for Machine k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf, Cluster k8s-upgrade-and-conformance-6xwdmz/k8s-upgrade-and-conformance-b2vx3j: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7, Cluster k8s-upgrade-and-conformance-6xwdmz/k8s-upgrade-and-conformance-b2vx3j: exit status 2
    Failed to get logs for Machine k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth, Cluster k8s-upgrade-and-conformance-6xwdmz/k8s-upgrade-and-conformance-b2vx3j: exit status 2
    Failed to get logs for MachinePool k8s-upgrade-and-conformance-b2vx3j-mp-0, Cluster k8s-upgrade-and-conformance-6xwdmz/k8s-upgrade-and-conformance-b2vx3j: exit status 2
  << End Captured StdOut/StdErr Output

  Begin Captured GinkgoWriter Output >>
    STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec 09/09/22 20:36:29.122
    INFO: Creating namespace k8s-upgrade-and-conformance-6xwdmz
    INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-6xwdmz"
... skipping 41 lines ...
    
    Running in parallel across 4 nodes
    
    Sep  9 20:45:46.626: INFO: >>> kubeConfig: /tmp/kubeconfig
    Sep  9 20:45:46.629: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
    Sep  9 20:45:46.648: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
    Sep  9 20:45:46.711: INFO: The status of Pod coredns-558bd4d5db-gqq78 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  9 20:45:46.712: INFO: The status of Pod kindnet-6qb69 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  9 20:45:46.712: INFO: The status of Pod kindnet-x8kt7 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  9 20:45:46.712: INFO: The status of Pod kube-proxy-jdp72 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  9 20:45:46.712: INFO: The status of Pod kube-proxy-mgdr5 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  9 20:45:46.712: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
    Sep  9 20:45:46.712: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  9 20:45:46.712: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  9 20:45:46.712: INFO: coredns-558bd4d5db-gqq78  k8s-upgrade-and-conformance-b2vx3j-worker-urtz6c  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:44:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:44:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:44:07 +0000 UTC  }]
    Sep  9 20:45:46.712: INFO: kindnet-6qb69             k8s-upgrade-and-conformance-b2vx3j-worker-urtz6c  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:38:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:38:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:38:34 +0000 UTC  }]
    Sep  9 20:45:46.712: INFO: kindnet-x8kt7             k8s-upgrade-and-conformance-b2vx3j-worker-4y59ry  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:38:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:38:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:38:17 +0000 UTC  }]
    Sep  9 20:45:46.712: INFO: kube-proxy-jdp72          k8s-upgrade-and-conformance-b2vx3j-worker-urtz6c  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:43:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:43:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:43:42 +0000 UTC  }]
    Sep  9 20:45:46.712: INFO: kube-proxy-mgdr5          k8s-upgrade-and-conformance-b2vx3j-worker-4y59ry  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:43:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:43:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:43:34 +0000 UTC  }]
    Sep  9 20:45:46.712: INFO: 
    Sep  9 20:45:48.739: INFO: The status of Pod coredns-558bd4d5db-gqq78 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  9 20:45:48.739: INFO: The status of Pod kindnet-6qb69 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  9 20:45:48.739: INFO: The status of Pod kindnet-x8kt7 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  9 20:45:48.739: INFO: The status of Pod kube-proxy-jdp72 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  9 20:45:48.739: INFO: The status of Pod kube-proxy-mgdr5 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  9 20:45:48.739: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
    Sep  9 20:45:48.739: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  9 20:45:48.739: INFO: POD                       NODE                                              PHASE    GRACE  CONDITIONS
    Sep  9 20:45:48.739: INFO: coredns-558bd4d5db-gqq78  k8s-upgrade-and-conformance-b2vx3j-worker-urtz6c  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:44:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:44:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:44:07 +0000 UTC  }]
    Sep  9 20:45:48.739: INFO: kindnet-6qb69             k8s-upgrade-and-conformance-b2vx3j-worker-urtz6c  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:38:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:38:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:38:34 +0000 UTC  }]
    Sep  9 20:45:48.739: INFO: kindnet-x8kt7             k8s-upgrade-and-conformance-b2vx3j-worker-4y59ry  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:38:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:38:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:38:17 +0000 UTC  }]
    Sep  9 20:45:48.739: INFO: kube-proxy-jdp72          k8s-upgrade-and-conformance-b2vx3j-worker-urtz6c  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:43:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:43:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:43:42 +0000 UTC  }]
    Sep  9 20:45:48.739: INFO: kube-proxy-mgdr5          k8s-upgrade-and-conformance-b2vx3j-worker-4y59ry  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:43:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:43:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:43:34 +0000 UTC  }]
    Sep  9 20:45:48.739: INFO: 
    Sep  9 20:45:50.735: INFO: The status of Pod coredns-558bd4d5db-t9bks is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  9 20:45:50.735: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
    Sep  9 20:45:50.735: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  9 20:45:50.735: INFO: POD                       NODE                                                           PHASE    GRACE  CONDITIONS
    Sep  9 20:45:50.735: INFO: coredns-558bd4d5db-t9bks  k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:50 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:50 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:50 +0000 UTC  }]
    Sep  9 20:45:50.735: INFO: 
    Sep  9 20:45:52.736: INFO: The status of Pod coredns-558bd4d5db-t9bks is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
    Sep  9 20:45:52.736: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
    Sep  9 20:45:52.736: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
    Sep  9 20:45:52.736: INFO: POD                       NODE                                                           PHASE    GRACE  CONDITIONS
    Sep  9 20:45:52.736: INFO: coredns-558bd4d5db-t9bks  k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:50 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:50 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 20:45:50 +0000 UTC  }]
    Sep  9 20:45:52.736: INFO: 
    Sep  9 20:45:54.732: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
... skipping 44 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:00.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-9960" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":1,"skipped":18,"failed":0}
    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:00.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-4909" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] server version
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:01.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "server-version-1253" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":3,"skipped":35,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    Sep  9 20:45:54.830: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-b10986a6-53f5-4f01-967b-9465ea616442
    STEP: Creating a pod to test consume secrets
    Sep  9 20:45:54.852: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5498afb4-8635-48e5-9b31-6905d7921e50" in namespace "projected-763" to be "Succeeded or Failed"
    Sep  9 20:45:54.864: INFO: Pod "pod-projected-secrets-5498afb4-8635-48e5-9b31-6905d7921e50": Phase="Pending", Reason="", readiness=false. Elapsed: 11.222567ms
    Sep  9 20:45:56.870: INFO: Pod "pod-projected-secrets-5498afb4-8635-48e5-9b31-6905d7921e50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017848844s
    Sep  9 20:45:58.878: INFO: Pod "pod-projected-secrets-5498afb4-8635-48e5-9b31-6905d7921e50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025806216s
    Sep  9 20:46:00.887: INFO: Pod "pod-projected-secrets-5498afb4-8635-48e5-9b31-6905d7921e50": Phase="Running", Reason="", readiness=true. Elapsed: 6.03469624s
    Sep  9 20:46:02.893: INFO: Pod "pod-projected-secrets-5498afb4-8635-48e5-9b31-6905d7921e50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040619233s
    STEP: Saw pod success
    Sep  9 20:46:02.893: INFO: Pod "pod-projected-secrets-5498afb4-8635-48e5-9b31-6905d7921e50" satisfied condition "Succeeded or Failed"
    Sep  9 20:46:02.899: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-projected-secrets-5498afb4-8635-48e5-9b31-6905d7921e50 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  9 20:46:02.934: INFO: Waiting for pod pod-projected-secrets-5498afb4-8635-48e5-9b31-6905d7921e50 to disappear
    Sep  9 20:46:02.939: INFO: Pod pod-projected-secrets-5498afb4-8635-48e5-9b31-6905d7921e50 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:02.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-763" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:05.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3454" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:46:03.020: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-2860a64f-69cc-4790-ad0b-3480b11611fa
    STEP: Creating a pod to test consume configMaps
    Sep  9 20:46:03.080: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e839b5c-3f2b-43a4-b399-3748bd21cfa7" in namespace "configmap-8496" to be "Succeeded or Failed"
    Sep  9 20:46:03.085: INFO: Pod "pod-configmaps-9e839b5c-3f2b-43a4-b399-3748bd21cfa7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168544ms
    Sep  9 20:46:05.090: INFO: Pod "pod-configmaps-9e839b5c-3f2b-43a4-b399-3748bd21cfa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00939959s
    STEP: Saw pod success
    Sep  9 20:46:05.090: INFO: Pod "pod-configmaps-9e839b5c-3f2b-43a4-b399-3748bd21cfa7" satisfied condition "Succeeded or Failed"
    Sep  9 20:46:05.094: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-configmaps-9e839b5c-3f2b-43a4-b399-3748bd21cfa7 container configmap-volume-test: <nil>
    STEP: delete the pod
    Sep  9 20:46:05.118: INFO: Waiting for pod pod-configmaps-9e839b5c-3f2b-43a4-b399-3748bd21cfa7 to disappear
    Sep  9 20:46:05.123: INFO: Pod pod-configmaps-9e839b5c-3f2b-43a4-b399-3748bd21cfa7 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:05.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8496" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":31,"failed":0}
    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods Extended
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:05.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-7525" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":2,"skipped":35,"failed":0}
    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:07.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-5987" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":104,"failed":0}
    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-4452-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":4,"skipped":63,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:46:05.234: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-680118e9-34ea-4251-ba43-0d59418a2a92
    STEP: Creating a pod to test consume secrets
    Sep  9 20:46:05.282: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-995c54d5-4949-4959-a520-6f7dff7055ce" in namespace "projected-2434" to be "Succeeded or Failed"
    Sep  9 20:46:05.285: INFO: Pod "pod-projected-secrets-995c54d5-4949-4959-a520-6f7dff7055ce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.100572ms
    Sep  9 20:46:07.290: INFO: Pod "pod-projected-secrets-995c54d5-4949-4959-a520-6f7dff7055ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008099125s
    Sep  9 20:46:09.297: INFO: Pod "pod-projected-secrets-995c54d5-4949-4959-a520-6f7dff7055ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014515683s
    STEP: Saw pod success
    Sep  9 20:46:09.297: INFO: Pod "pod-projected-secrets-995c54d5-4949-4959-a520-6f7dff7055ce" satisfied condition "Succeeded or Failed"
    Sep  9 20:46:09.301: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-projected-secrets-995c54d5-4949-4959-a520-6f7dff7055ce container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  9 20:46:09.319: INFO: Waiting for pod pod-projected-secrets-995c54d5-4949-4959-a520-6f7dff7055ce to disappear
    Sep  9 20:46:09.323: INFO: Pod pod-projected-secrets-995c54d5-4949-4959-a520-6f7dff7055ce no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:09.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2434" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":42,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:46:07.754: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep  9 20:46:07.814: INFO: PodSpec: initContainers in spec.initContainers
    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:12.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-4776" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":5,"skipped":101,"failed":0}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 43 lines ...
    STEP: Destroying namespace "services-2343" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":6,"skipped":111,"failed":0}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:46:07.147: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep  9 20:46:07.221: INFO: Waiting up to 5m0s for pod "pod-72478389-09c7-446a-9f14-24370f8b24fa" in namespace "emptydir-4521" to be "Succeeded or Failed"
    Sep  9 20:46:07.231: INFO: Pod "pod-72478389-09c7-446a-9f14-24370f8b24fa": Phase="Pending", Reason="", readiness=false. Elapsed: 7.681187ms
    Sep  9 20:46:09.235: INFO: Pod "pod-72478389-09c7-446a-9f14-24370f8b24fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011842799s
    Sep  9 20:46:11.241: INFO: Pod "pod-72478389-09c7-446a-9f14-24370f8b24fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018104326s
    Sep  9 20:46:13.246: INFO: Pod "pod-72478389-09c7-446a-9f14-24370f8b24fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023059261s
    STEP: Saw pod success
    Sep  9 20:46:13.246: INFO: Pod "pod-72478389-09c7-446a-9f14-24370f8b24fa" satisfied condition "Succeeded or Failed"
    Sep  9 20:46:13.250: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth pod pod-72478389-09c7-446a-9f14-24370f8b24fa container test-container: <nil>
    STEP: delete the pod
    Sep  9 20:46:13.279: INFO: Waiting for pod pod-72478389-09c7-446a-9f14-24370f8b24fa to disappear
    Sep  9 20:46:13.282: INFO: Pod pod-72478389-09c7-446a-9f14-24370f8b24fa no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:13.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4521" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":119,"failed":0}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:46:09.439: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-7119a289-793f-4693-9ff3-7e6a048444b3
    STEP: Creating a pod to test consume secrets
    Sep  9 20:46:09.505: INFO: Waiting up to 5m0s for pod "pod-secrets-18620e65-2bea-427a-908b-6e21f67ae421" in namespace "secrets-2596" to be "Succeeded or Failed"
    Sep  9 20:46:09.510: INFO: Pod "pod-secrets-18620e65-2bea-427a-908b-6e21f67ae421": Phase="Pending", Reason="", readiness=false. Elapsed: 3.991936ms
    Sep  9 20:46:11.519: INFO: Pod "pod-secrets-18620e65-2bea-427a-908b-6e21f67ae421": Phase="Running", Reason="", readiness=true. Elapsed: 2.013177213s
    Sep  9 20:46:13.524: INFO: Pod "pod-secrets-18620e65-2bea-427a-908b-6e21f67ae421": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018509828s
    STEP: Saw pod success
    Sep  9 20:46:13.524: INFO: Pod "pod-secrets-18620e65-2bea-427a-908b-6e21f67ae421" satisfied condition "Succeeded or Failed"
    Sep  9 20:46:13.528: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-secrets-18620e65-2bea-427a-908b-6e21f67ae421 container secret-env-test: <nil>
    STEP: delete the pod
    Sep  9 20:46:13.558: INFO: Waiting for pod pod-secrets-18620e65-2bea-427a-908b-6e21f67ae421 to disappear
    Sep  9 20:46:13.564: INFO: Pod pod-secrets-18620e65-2bea-427a-908b-6e21f67ae421 no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:13.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-2596" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":80,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:20.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-202" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":5,"skipped":107,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 42 lines ...
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-c866b4ef-f5c8-4d47-8242-25919caeff39
    STEP: Creating a pod to test consume configMaps
    Sep  9 20:46:20.770: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-31e13295-b210-4fdc-b7ab-267995416a7e" in namespace "projected-1381" to be "Succeeded or Failed"
    Sep  9 20:46:20.774: INFO: Pod "pod-projected-configmaps-31e13295-b210-4fdc-b7ab-267995416a7e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.456113ms
    Sep  9 20:46:22.780: INFO: Pod "pod-projected-configmaps-31e13295-b210-4fdc-b7ab-267995416a7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008961763s
    STEP: Saw pod success
    Sep  9 20:46:22.780: INFO: Pod "pod-projected-configmaps-31e13295-b210-4fdc-b7ab-267995416a7e" satisfied condition "Succeeded or Failed"
    Sep  9 20:46:22.784: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth pod pod-projected-configmaps-31e13295-b210-4fdc-b7ab-267995416a7e container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 20:46:22.805: INFO: Waiting for pod pod-projected-configmaps-31e13295-b210-4fdc-b7ab-267995416a7e to disappear
    Sep  9 20:46:22.808: INFO: Pod pod-projected-configmaps-31e13295-b210-4fdc-b7ab-267995416a7e no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:22.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1381" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":113,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":3,"skipped":122,"failed":0}

    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:46:14.460: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename resourcequota
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:25.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-7757" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":4,"skipped":122,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-2595-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":7,"skipped":139,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
    STEP: Destroying namespace "webhook-4643-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":8,"skipped":149,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":7,"skipped":148,"failed":0}

    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:46:21.057: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename subpath
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] Atomic writer volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with secret pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-secret-qzk2
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  9 20:46:21.165: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-qzk2" in namespace "subpath-5181" to be "Succeeded or Failed"
    Sep  9 20:46:21.172: INFO: Pod "pod-subpath-test-secret-qzk2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.613925ms
    Sep  9 20:46:23.178: INFO: Pod "pod-subpath-test-secret-qzk2": Phase="Running", Reason="", readiness=true. Elapsed: 2.012458621s
    Sep  9 20:46:25.186: INFO: Pod "pod-subpath-test-secret-qzk2": Phase="Running", Reason="", readiness=true. Elapsed: 4.020786268s
    Sep  9 20:46:27.190: INFO: Pod "pod-subpath-test-secret-qzk2": Phase="Running", Reason="", readiness=true. Elapsed: 6.024701441s
    Sep  9 20:46:29.196: INFO: Pod "pod-subpath-test-secret-qzk2": Phase="Running", Reason="", readiness=true. Elapsed: 8.030163323s
    Sep  9 20:46:31.200: INFO: Pod "pod-subpath-test-secret-qzk2": Phase="Running", Reason="", readiness=true. Elapsed: 10.034448886s
    Sep  9 20:46:33.205: INFO: Pod "pod-subpath-test-secret-qzk2": Phase="Running", Reason="", readiness=true. Elapsed: 12.039418271s
    Sep  9 20:46:35.211: INFO: Pod "pod-subpath-test-secret-qzk2": Phase="Running", Reason="", readiness=true. Elapsed: 14.045537034s
    Sep  9 20:46:37.218: INFO: Pod "pod-subpath-test-secret-qzk2": Phase="Running", Reason="", readiness=true. Elapsed: 16.052341693s
    Sep  9 20:46:39.224: INFO: Pod "pod-subpath-test-secret-qzk2": Phase="Running", Reason="", readiness=true. Elapsed: 18.05797997s
    Sep  9 20:46:41.230: INFO: Pod "pod-subpath-test-secret-qzk2": Phase="Running", Reason="", readiness=true. Elapsed: 20.064012757s
    Sep  9 20:46:43.236: INFO: Pod "pod-subpath-test-secret-qzk2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.069895955s
    STEP: Saw pod success
    Sep  9 20:46:43.236: INFO: Pod "pod-subpath-test-secret-qzk2" satisfied condition "Succeeded or Failed"
    Sep  9 20:46:43.239: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-6rlx5y pod pod-subpath-test-secret-qzk2 container test-container-subpath-secret-qzk2: <nil>
    STEP: delete the pod
    Sep  9 20:46:43.256: INFO: Waiting for pod pod-subpath-test-secret-qzk2 to disappear
    Sep  9 20:46:43.260: INFO: Pod pod-subpath-test-secret-qzk2 no longer exists
    STEP: Deleting pod pod-subpath-test-secret-qzk2
    Sep  9 20:46:43.260: INFO: Deleting pod "pod-subpath-test-secret-qzk2" in namespace "subpath-5181"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:43.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-5181" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":148,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:46:42.252: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name projected-secret-test-b87ad6a8-57fd-4de8-ab09-3bcd40f72216
    STEP: Creating a pod to test consume secrets
    Sep  9 20:46:42.333: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0738ec39-59e3-4350-ba4d-423d098f246b" in namespace "projected-7165" to be "Succeeded or Failed"
    Sep  9 20:46:42.341: INFO: Pod "pod-projected-secrets-0738ec39-59e3-4350-ba4d-423d098f246b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.002135ms
    Sep  9 20:46:44.345: INFO: Pod "pod-projected-secrets-0738ec39-59e3-4350-ba4d-423d098f246b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011362388s
    STEP: Saw pod success
    Sep  9 20:46:44.345: INFO: Pod "pod-projected-secrets-0738ec39-59e3-4350-ba4d-423d098f246b" satisfied condition "Succeeded or Failed"
    Sep  9 20:46:44.348: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-projected-secrets-0738ec39-59e3-4350-ba4d-423d098f246b container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  9 20:46:44.371: INFO: Waiting for pod pod-projected-secrets-0738ec39-59e3-4350-ba4d-423d098f246b to disappear
    Sep  9 20:46:44.374: INFO: Pod pod-projected-secrets-0738ec39-59e3-4350-ba4d-423d098f246b no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:44.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7165" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":192,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:46:43.286: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-5f97b3e4-8754-4f3c-919d-caf3a7202148
    STEP: Creating a pod to test consume secrets
    Sep  9 20:46:43.340: INFO: Waiting up to 5m0s for pod "pod-secrets-55e9c5da-eb90-4bd6-aea1-8eb810051098" in namespace "secrets-1754" to be "Succeeded or Failed"
    Sep  9 20:46:43.345: INFO: Pod "pod-secrets-55e9c5da-eb90-4bd6-aea1-8eb810051098": Phase="Pending", Reason="", readiness=false. Elapsed: 4.531523ms
    Sep  9 20:46:45.349: INFO: Pod "pod-secrets-55e9c5da-eb90-4bd6-aea1-8eb810051098": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009332182s
    STEP: Saw pod success
    Sep  9 20:46:45.349: INFO: Pod "pod-secrets-55e9c5da-eb90-4bd6-aea1-8eb810051098" satisfied condition "Succeeded or Failed"
    Sep  9 20:46:45.353: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-6rlx5y pod pod-secrets-55e9c5da-eb90-4bd6-aea1-8eb810051098 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  9 20:46:45.374: INFO: Waiting for pod pod-secrets-55e9c5da-eb90-4bd6-aea1-8eb810051098 to disappear
    Sep  9 20:46:45.377: INFO: Pod pod-secrets-55e9c5da-eb90-4bd6-aea1-8eb810051098 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 8 lines ...
    Sep  9 20:46:44.390: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep  9 20:46:44.441: INFO: Waiting up to 5m0s for pod "pod-11488acc-05ea-4f44-baab-b7511b592cb5" in namespace "emptydir-2327" to be "Succeeded or Failed"
    Sep  9 20:46:44.445: INFO: Pod "pod-11488acc-05ea-4f44-baab-b7511b592cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.627472ms
    Sep  9 20:46:46.449: INFO: Pod "pod-11488acc-05ea-4f44-baab-b7511b592cb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007911514s
    STEP: Saw pod success
    Sep  9 20:46:46.449: INFO: Pod "pod-11488acc-05ea-4f44-baab-b7511b592cb5" satisfied condition "Succeeded or Failed"
    Sep  9 20:46:46.452: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-11488acc-05ea-4f44-baab-b7511b592cb5 container test-container: <nil>
    STEP: delete the pod
    Sep  9 20:46:46.477: INFO: Waiting for pod pod-11488acc-05ea-4f44-baab-b7511b592cb5 to disappear
    Sep  9 20:46:46.481: INFO: Pod pod-11488acc-05ea-4f44-baab-b7511b592cb5 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:46.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-2327" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":194,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:46:56.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-2855" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":235,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:47:01.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2568" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":254,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 110 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:47:15.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-9427" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":13,"skipped":263,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:47:15.501: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-e52c5ca2-fe18-4bf3-8c70-86f4a9d499fb
    STEP: Creating a pod to test consume configMaps
    Sep  9 20:47:15.566: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-20c8696c-c44d-43a9-8a0f-eb6cba224c6e" in namespace "projected-311" to be "Succeeded or Failed"
    Sep  9 20:47:15.570: INFO: Pod "pod-projected-configmaps-20c8696c-c44d-43a9-8a0f-eb6cba224c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.216636ms
    Sep  9 20:47:17.574: INFO: Pod "pod-projected-configmaps-20c8696c-c44d-43a9-8a0f-eb6cba224c6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007570643s
    STEP: Saw pod success
    Sep  9 20:47:17.574: INFO: Pod "pod-projected-configmaps-20c8696c-c44d-43a9-8a0f-eb6cba224c6e" satisfied condition "Succeeded or Failed"
    Sep  9 20:47:17.577: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-projected-configmaps-20c8696c-c44d-43a9-8a0f-eb6cba224c6e container projected-configmap-volume-test: <nil>
    STEP: delete the pod
    Sep  9 20:47:17.597: INFO: Waiting for pod pod-projected-configmaps-20c8696c-c44d-43a9-8a0f-eb6cba224c6e to disappear
    Sep  9 20:47:17.601: INFO: Pod pod-projected-configmaps-20c8696c-c44d-43a9-8a0f-eb6cba224c6e no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:47:17.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-311" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":316,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
    STEP: Destroying namespace "services-4446" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":15,"skipped":357,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:47:17.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-8395" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":136,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:47:40.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-3739" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":401,"failed":0}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:47:40.398: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep  9 20:47:40.437: INFO: Waiting up to 5m0s for pod "pod-13f22bec-d87e-4497-97ff-dd53c124b717" in namespace "emptydir-4745" to be "Succeeded or Failed"
    Sep  9 20:47:40.440: INFO: Pod "pod-13f22bec-d87e-4497-97ff-dd53c124b717": Phase="Pending", Reason="", readiness=false. Elapsed: 3.308071ms
    Sep  9 20:47:42.445: INFO: Pod "pod-13f22bec-d87e-4497-97ff-dd53c124b717": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008118877s
    STEP: Saw pod success
    Sep  9 20:47:42.445: INFO: Pod "pod-13f22bec-d87e-4497-97ff-dd53c124b717" satisfied condition "Succeeded or Failed"
    Sep  9 20:47:42.449: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-6rlx5y pod pod-13f22bec-d87e-4497-97ff-dd53c124b717 container test-container: <nil>
    STEP: delete the pod
    Sep  9 20:47:42.468: INFO: Waiting for pod pod-13f22bec-d87e-4497-97ff-dd53c124b717 to disappear
    Sep  9 20:47:42.472: INFO: Pod pod-13f22bec-d87e-4497-97ff-dd53c124b717 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:47:42.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4745" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":411,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":153,"failed":0}

    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:46:45.391: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-probe
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:47:45.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-9800" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":153,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
    
    Sep  9 20:47:55.729: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment":
    &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88  deployment-8811  e6391fe5-bfeb-40b7-a429-511545d45849 4304 3 2022-09-09 20:47:53 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 7d29714d-b2d2-4d62-8052-72e27a5715df 0xc0030bf277 0xc0030bf278}] []  [{kube-controller-manager Update apps/v1 2022-09-09 20:47:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d29714d-b2d2-4d62-8052-72e27a5715df\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0030bf2f8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
    Sep  9 20:47:55.730: INFO: All old ReplicaSets of Deployment "webserver-deployment":
    Sep  9 20:47:55.730: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb  deployment-8811  ec9fd7cc-366f-4bcd-b543-d6f8ebbb1c07 4301 3 2022-09-09 20:47:45 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 7d29714d-b2d2-4d62-8052-72e27a5715df 0xc0030bf357 0xc0030bf358}] []  [{kube-controller-manager Update apps/v1 2022-09-09 20:47:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d29714d-b2d2-4d62-8052-72e27a5715df\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] []  []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0030bf3c8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
    Sep  9 20:47:55.754: INFO: Pod "webserver-deployment-795d758f88-74tgr" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-74tgr webserver-deployment-795d758f88- deployment-8811  9475a772-a859-4631-8036-2579386a147d 4299 0 2022-09-09 20:47:53 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e6391fe5-bfeb-40b7-a429-511545d45849 0xc003619610 0xc003619611}] []  [{kube-controller-manager Update v1 2022-09-09 20:47:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e6391fe5-bfeb-40b7-a429-511545d45849\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-09 20:47:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zszpg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:
ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zszpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[
]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.10,StartTime:2022-09-09 20:47:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  9 20:47:55.755: INFO: Pod "webserver-deployment-795d758f88-8fl4s" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-8fl4s webserver-deployment-795d758f88- deployment-8811  06438ea2-a0c8-4250-aa7b-0ca6303843f7 4267 0 2022-09-09 20:47:53 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e6391fe5-bfeb-40b7-a429-511545d45849 0xc003619810 0xc003619811}] []  [{kube-controller-manager Update v1 2022-09-09 20:47:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e6391fe5-bfeb-40b7-a429-511545d45849\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-09 20:47:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dv47v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Na
me:kube-api-access-dv47v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Condi
tions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2022-09-09 20:47:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  9 20:47:55.755: INFO: Pod "webserver-deployment-795d758f88-mvf8g" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-mvf8g webserver-deployment-795d758f88- deployment-8811  fa739401-9feb-4126-a8e7-10868caae362 4288 0 2022-09-09 20:47:53 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e6391fe5-bfeb-40b7-a429-511545d45849 0xc0036199e0 0xc0036199e1}] []  [{kube-controller-manager Update v1 2022-09-09 20:47:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e6391fe5-bfeb-40b7-a429-511545d45849\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-09 20:47:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.20\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-878sn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:
ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-878sn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-b2vx3j-worker-6rlx5y,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralCon
tainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.20,StartTime:2022-09-09 20:47:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  9 20:47:55.755: INFO: Pod "webserver-deployment-795d758f88-n9pzj" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-n9pzj webserver-deployment-795d758f88- deployment-8811  57de9577-421b-4a64-a741-dc54c7516521 4296 0 2022-09-09 20:47:53 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e6391fe5-bfeb-40b7-a429-511545d45849 0xc003619be0 0xc003619be1}] []  [{kube-controller-manager Update v1 2022-09-09 20:47:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e6391fe5-bfeb-40b7-a429-511545d45849\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-09 20:47:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7fwh2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:
ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7fwh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-b2vx3j-worker-advsih,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralCon
tainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.6.26,StartTime:2022-09-09 20:47:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

    Sep  9 20:47:55.756: INFO: Pod "webserver-deployment-795d758f88-qgsdg" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-qgsdg webserver-deployment-795d758f88- deployment-8811  1caf577e-9bf1-4b04-82e3-53be929f3a2a 4321 0 2022-09-09 20:47:55 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e6391fe5-bfeb-40b7-a429-511545d45849 0xc003619de0 0xc003619de1}] []  [{kube-controller-manager Update v1 2022-09-09 20:47:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e6391fe5-bfeb-40b7-a429-511545d45849\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m44rj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m44rj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-b2vx3j-worker-advsih,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  9 20:47:55.756: INFO: Pod "webserver-deployment-795d758f88-smx7w" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-smx7w webserver-deployment-795d758f88- deployment-8811  0f34d33d-8dd6-4898-a56a-5723c26709c8 4291 0 2022-09-09 20:47:53 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e6391fe5-bfeb-40b7-a429-511545d45849 0xc003619f40 0xc003619f41}] []  [{kube-controller-manager Update v1 2022-09-09 20:47:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e6391fe5-bfeb-40b7-a429-511545d45849\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-09 20:47:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-b5sz5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5sz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.1.12,StartTime:2022-09-09 20:47:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  9 20:47:55.756: INFO: Pod "webserver-deployment-795d758f88-sw4bf" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-sw4bf webserver-deployment-795d758f88- deployment-8811  aa3411fc-eea9-44a7-9248-d1122a59654f 4317 0 2022-09-09 20:47:55 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e6391fe5-bfeb-40b7-a429-511545d45849 0xc004048140 0xc004048141}] []  [{kube-controller-manager Update v1 2022-09-09 20:47:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e6391fe5-bfeb-40b7-a429-511545d45849\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-plqzs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-plqzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-b2vx3j-worker-6rlx5y,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  9 20:47:55.757: INFO: Pod "webserver-deployment-795d758f88-v5wrj" is not available:
    &Pod{ObjectMeta:{webserver-deployment-795d758f88-v5wrj webserver-deployment-795d758f88- deployment-8811  7dd4a588-9564-478e-a723-cc7f572aaebe 4324 0 2022-09-09 20:47:55 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 e6391fe5-bfeb-40b7-a429-511545d45849 0xc0040482a0 0xc0040482a1}] []  [{kube-controller-manager Update v1 2022-09-09 20:47:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e6391fe5-bfeb-40b7-a429-511545d45849\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vmk98,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vmk98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
    Sep  9 20:47:55.757: INFO: Pod "webserver-deployment-847dcfb7fb-2z69x" is available:
    &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2z69x webserver-deployment-847dcfb7fb- deployment-8811  a0607dfb-2c30-453f-ba6d-368a5b0305f2 4199 0 2022-09-09 20:47:45 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb ec9fd7cc-366f-4bcd-b543-d6f8ebbb1c07 0xc004048400 0xc004048401}] []  [{kube-controller-manager Update v1 2022-09-09 20:47:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ec9fd7cc-366f-4bcd-b543-d6f8ebbb1c07\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-09-09 20:47:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7hw4z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7hw4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-09 20:47:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.1.9,StartTime:2022-09-09 20:47:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-09-09 20:47:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://f2c71cbdda8f2974cfb98f5171d784132dea62da494b71dcb0f0636cc0ef50f7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:47:55.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-8811" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":11,"skipped":178,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:47:55.858: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test env composition
    Sep  9 20:47:55.949: INFO: Waiting up to 5m0s for pod "var-expansion-401b7a72-fdb0-48be-8f18-0843ac219ba3" in namespace "var-expansion-9897" to be "Succeeded or Failed"
    Sep  9 20:47:55.953: INFO: Pod "var-expansion-401b7a72-fdb0-48be-8f18-0843ac219ba3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.962433ms
    Sep  9 20:47:57.958: INFO: Pod "var-expansion-401b7a72-fdb0-48be-8f18-0843ac219ba3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008805875s
    Sep  9 20:47:59.964: INFO: Pod "var-expansion-401b7a72-fdb0-48be-8f18-0843ac219ba3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014711399s
    STEP: Saw pod success
    Sep  9 20:47:59.964: INFO: Pod "var-expansion-401b7a72-fdb0-48be-8f18-0843ac219ba3" satisfied condition "Succeeded or Failed"
    Sep  9 20:47:59.970: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-6rlx5y pod var-expansion-401b7a72-fdb0-48be-8f18-0843ac219ba3 container dapi-container: <nil>
    STEP: delete the pod
    Sep  9 20:47:59.992: INFO: Waiting for pod var-expansion-401b7a72-fdb0-48be-8f18-0843ac219ba3 to disappear
    Sep  9 20:47:59.998: INFO: Pod var-expansion-401b7a72-fdb0-48be-8f18-0843ac219ba3 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:47:59.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-9897" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":184,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 59 lines ...
    STEP: Destroying namespace "services-4848" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":138,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 20:48:03.334: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7da9cf16-07de-4d51-93ce-84e94d842881" in namespace "downward-api-6548" to be "Succeeded or Failed"
    Sep  9 20:48:03.338: INFO: Pod "downwardapi-volume-7da9cf16-07de-4d51-93ce-84e94d842881": Phase="Pending", Reason="", readiness=false. Elapsed: 3.877401ms
    Sep  9 20:48:05.343: INFO: Pod "downwardapi-volume-7da9cf16-07de-4d51-93ce-84e94d842881": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008931077s
    STEP: Saw pod success
    Sep  9 20:48:05.343: INFO: Pod "downwardapi-volume-7da9cf16-07de-4d51-93ce-84e94d842881" satisfied condition "Succeeded or Failed"
    Sep  9 20:48:05.347: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod downwardapi-volume-7da9cf16-07de-4d51-93ce-84e94d842881 container client-container: <nil>
    STEP: delete the pod
    Sep  9 20:48:05.366: INFO: Waiting for pod downwardapi-volume-7da9cf16-07de-4d51-93ce-84e94d842881 to disappear
    Sep  9 20:48:05.370: INFO: Pod downwardapi-volume-7da9cf16-07de-4d51-93ce-84e94d842881 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:48:05.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6548" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":155,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:48:05.396: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-496fd140-b32f-4191-8f83-0b8226f66a5a
    STEP: Creating a pod to test consume secrets
    Sep  9 20:48:05.447: INFO: Waiting up to 5m0s for pod "pod-secrets-3de9b989-3e0e-415d-9bb9-646f612c4173" in namespace "secrets-1605" to be "Succeeded or Failed"
    Sep  9 20:48:05.454: INFO: Pod "pod-secrets-3de9b989-3e0e-415d-9bb9-646f612c4173": Phase="Pending", Reason="", readiness=false. Elapsed: 7.17937ms
    Sep  9 20:48:07.459: INFO: Pod "pod-secrets-3de9b989-3e0e-415d-9bb9-646f612c4173": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01200071s
    STEP: Saw pod success
    Sep  9 20:48:07.459: INFO: Pod "pod-secrets-3de9b989-3e0e-415d-9bb9-646f612c4173" satisfied condition "Succeeded or Failed"
    Sep  9 20:48:07.463: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-secrets-3de9b989-3e0e-415d-9bb9-646f612c4173 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  9 20:48:07.482: INFO: Waiting for pod pod-secrets-3de9b989-3e0e-415d-9bb9-646f612c4173 to disappear
    Sep  9 20:48:07.486: INFO: Pod pod-secrets-3de9b989-3e0e-415d-9bb9-646f612c4173 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:48:07.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-1605" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":161,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:48:18.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-9025" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":9,"skipped":167,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    STEP: Registering the mutating webhook for custom resource e2e-test-webhook-680-crds.webhook.example.com via the AdmissionRegistration API
    Sep  9 20:47:56.572: INFO: Waiting for webhook configuration to be ready...
    Sep  9 20:48:06.685: INFO: Waiting for webhook configuration to be ready...
    Sep  9 20:48:16.791: INFO: Waiting for webhook configuration to be ready...
    Sep  9 20:48:26.887: INFO: Waiting for webhook configuration to be ready...
    Sep  9 20:48:36.899: INFO: Waiting for webhook configuration to be ready...
    Sep  9 20:48:36.899: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should mutate custom resource with pruning [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  9 20:48:36.900: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
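    [Editor's note] The repeated "Waiting for webhook configuration to be ready..." lines above come from a poll loop that retries a readiness check on a fixed interval and, after a fixed timeout, surfaces the generic "timed out waiting for the condition" error seen in the failure dump. A minimal stdlib-only sketch of that pattern (function names and intervals are illustrative, not the e2e framework's actual API):

    ```go
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // errWaitTimeout mirrors the generic timeout error string in the log above.
    var errWaitTimeout = errors.New("timed out waiting for the condition")

    // pollUntil retries condition every interval until it reports done, returns
    // an error, or timeout elapses — the shape of the wait loop behind the
    // "Waiting for webhook configuration to be ready..." messages.
    func pollUntil(interval, timeout time.Duration, condition func() (bool, error)) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		done, err := condition()
    		if err != nil {
    			return err
    		}
    		if done {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errWaitTimeout
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	// A condition that never becomes ready, like the failing webhook above.
    	err := pollUntil(time.Millisecond, 10*time.Millisecond, func() (bool, error) {
    		fmt.Println("Waiting for webhook configuration to be ready...")
    		return false, nil
    	})
    	fmt.Println(err) // timed out waiting for the condition
    }
    ```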
... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:48:48.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-1332" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":10,"skipped":199,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:48:50.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-8455" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":11,"skipped":200,"failed":0}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Looking for a node to schedule stateful set and pod
    STEP: Creating pod with conflicting port in namespace statefulset-6122
    STEP: Creating statefulset with conflicting port in namespace statefulset-6122
    STEP: Waiting until pod test-pod will start running in namespace statefulset-6122
    STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6122
    Sep  9 20:48:54.212: INFO: Observed stateful pod in namespace: statefulset-6122, name: ss-0, uid: dc1e0b1a-a5a9-48f7-b380-89131480892a, status phase: Pending. Waiting for statefulset controller to delete.
    Sep  9 20:48:54.412: INFO: Observed stateful pod in namespace: statefulset-6122, name: ss-0, uid: dc1e0b1a-a5a9-48f7-b380-89131480892a, status phase: Failed. Waiting for statefulset controller to delete.
    Sep  9 20:48:54.421: INFO: Observed stateful pod in namespace: statefulset-6122, name: ss-0, uid: dc1e0b1a-a5a9-48f7-b380-89131480892a, status phase: Failed. Waiting for statefulset controller to delete.
    Sep  9 20:48:54.426: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6122
    STEP: Removing pod with conflicting port in namespace statefulset-6122
    STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6122 and will be in running state
    [AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
    Sep  9 20:48:58.456: INFO: Deleting all statefulset in ns statefulset-6122
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:49:08.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-6122" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":12,"skipped":205,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:49:12.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-2627" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":13,"skipped":211,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":17,"skipped":434,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:48:37.510: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
    STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9219-crds.webhook.example.com via the AdmissionRegistration API
    Sep  9 20:48:51.759: INFO: Waiting for webhook configuration to be ready...
    Sep  9 20:49:01.874: INFO: Waiting for webhook configuration to be ready...
    Sep  9 20:49:11.975: INFO: Waiting for webhook configuration to be ready...
    Sep  9 20:49:22.072: INFO: Waiting for webhook configuration to be ready...
    Sep  9 20:49:32.088: INFO: Waiting for webhook configuration to be ready...
    Sep  9 20:49:32.088: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should mutate custom resource with pruning [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  9 20:49:32.088: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":17,"skipped":434,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:49:32.688: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
    STEP: Destroying namespace "webhook-7350-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":18,"skipped":434,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:49:39.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8982" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":19,"skipped":443,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:49:54.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-9943" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":20,"skipped":445,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.784 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":38,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:50:07.958: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-96ae6b6a-656d-4d0c-9ea6-563acdec965d
    STEP: Creating a pod to test consume secrets
    Sep  9 20:50:08.076: INFO: Waiting up to 5m0s for pod "pod-secrets-1e344752-aafd-45f5-8a77-69d17e39bc25" in namespace "secrets-6123" to be "Succeeded or Failed"
    Sep  9 20:50:08.091: INFO: Pod "pod-secrets-1e344752-aafd-45f5-8a77-69d17e39bc25": Phase="Pending", Reason="", readiness=false. Elapsed: 14.720491ms
    Sep  9 20:50:10.100: INFO: Pod "pod-secrets-1e344752-aafd-45f5-8a77-69d17e39bc25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0233692s
    STEP: Saw pod success
    Sep  9 20:50:10.100: INFO: Pod "pod-secrets-1e344752-aafd-45f5-8a77-69d17e39bc25" satisfied condition "Succeeded or Failed"
    Sep  9 20:50:10.108: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod pod-secrets-1e344752-aafd-45f5-8a77-69d17e39bc25 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  9 20:50:10.174: INFO: Waiting for pod pod-secrets-1e344752-aafd-45f5-8a77-69d17e39bc25 to disappear
    Sep  9 20:50:10.180: INFO: Pod pod-secrets-1e344752-aafd-45f5-8a77-69d17e39bc25 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:50:10.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-6123" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":42,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    Sep  9 20:49:59.099: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:49:59.106: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:49:59.127: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:49:59.135: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:49:59.143: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:49:59.152: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:49:59.167: INFO: Lookups using dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4203.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4203.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local jessie_udp@dns-test-service-2.dns-4203.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4203.svc.cluster.local]
    
    Sep  9 20:50:04.175: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:04.183: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:04.192: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:04.200: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:04.224: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:04.230: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:04.237: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:04.246: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:04.264: INFO: Lookups using dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4203.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4203.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local jessie_udp@dns-test-service-2.dns-4203.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4203.svc.cluster.local]
    
    Sep  9 20:50:09.180: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:09.186: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:09.194: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:09.203: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:09.234: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:09.241: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:09.249: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:09.256: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:09.278: INFO: Lookups using dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4203.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4203.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local jessie_udp@dns-test-service-2.dns-4203.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4203.svc.cluster.local]
    
    Sep  9 20:50:14.175: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:14.184: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:14.195: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:14.205: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:14.239: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:14.250: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:14.256: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:14.263: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:14.277: INFO: Lookups using dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4203.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4203.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local jessie_udp@dns-test-service-2.dns-4203.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4203.svc.cluster.local]
    
    Sep  9 20:50:19.175: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:19.181: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:19.192: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:19.201: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:19.221: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:19.229: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:19.236: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:19.242: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:19.255: INFO: Lookups using dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4203.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4203.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local jessie_udp@dns-test-service-2.dns-4203.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4203.svc.cluster.local]

    
    Sep  9 20:50:24.176: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:24.183: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:24.190: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:24.209: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:24.230: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:24.240: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:24.253: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:24.259: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4203.svc.cluster.local from pod dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9: the server could not find the requested resource (get pods dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9)
    Sep  9 20:50:24.274: INFO: Lookups using dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4203.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4203.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4203.svc.cluster.local jessie_udp@dns-test-service-2.dns-4203.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4203.svc.cluster.local]

    
    Sep  9 20:50:29.256: INFO: DNS probes using dns-4203/dns-test-435ac131-40cb-4f14-9a66-6fb4509eaae9 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:50:29.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-4203" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":21,"skipped":447,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:50:29.341: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-a3f52dff-9aa6-4708-9eec-ea72d6b1c876
    STEP: Creating a pod to test consume configMaps
    Sep  9 20:50:29.457: INFO: Waiting up to 5m0s for pod "pod-configmaps-694e61fa-0b90-4f88-895e-0e0fbf9befaf" in namespace "configmap-6373" to be "Succeeded or Failed"
    Sep  9 20:50:29.465: INFO: Pod "pod-configmaps-694e61fa-0b90-4f88-895e-0e0fbf9befaf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.737719ms
    Sep  9 20:50:31.474: INFO: Pod "pod-configmaps-694e61fa-0b90-4f88-895e-0e0fbf9befaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016402021s
    STEP: Saw pod success
    Sep  9 20:50:31.474: INFO: Pod "pod-configmaps-694e61fa-0b90-4f88-895e-0e0fbf9befaf" satisfied condition "Succeeded or Failed"
    Sep  9 20:50:31.481: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-configmaps-694e61fa-0b90-4f88-895e-0e0fbf9befaf container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 20:50:31.529: INFO: Waiting for pod pod-configmaps-694e61fa-0b90-4f88-895e-0e0fbf9befaf to disappear
    Sep  9 20:50:31.537: INFO: Pod pod-configmaps-694e61fa-0b90-4f88-895e-0e0fbf9befaf no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:50:40.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-3888" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":5,"skipped":92,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:50:45.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-8398" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":6,"skipped":114,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:51:23.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-9791" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":7,"skipped":134,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:51:23.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-1544" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":8,"skipped":142,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 147 lines ...
    Sep  9 20:51:21.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1843 exec execpod-affinitywstv8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 31315'
    Sep  9 20:51:23.971: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 31315\nConnection to 172.18.0.4 31315 port [tcp/*] succeeded!\n"
    Sep  9 20:51:23.971: INFO: stdout: ""
    Sep  9 20:51:23.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1843 exec execpod-affinitywstv8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 31315'
    Sep  9 20:51:26.320: INFO: stderr: "+ nc -v -t -w 2 172.18.0.4 31315\n+ echo hostName\nConnection to 172.18.0.4 31315 port [tcp/*] succeeded!\n"
    Sep  9 20:51:26.320: INFO: stdout: ""
    Sep  9 20:51:26.320: FAIL: Unexpected error:
        <*errors.errorString | 0xc003fa8640>: {
            s: "service is not reachable within 2m0s timeout on endpoint 172.18.0.4:31315 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint 172.18.0.4:31315 over TCP protocol
    occurred
    
... skipping 25 lines ...
    • Failure [144.652 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  9 20:51:26.321: Unexpected error:
          <*errors.errorString | 0xc003fa8640>: {
              s: "service is not reachable within 2m0s timeout on endpoint 172.18.0.4:31315 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint 172.18.0.4:31315 over TCP protocol
      occurred
    
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-xmgz
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  9 20:51:23.786: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xmgz" in namespace "subpath-565" to be "Succeeded or Failed"
    Sep  9 20:51:23.794: INFO: Pod "pod-subpath-test-configmap-xmgz": Phase="Pending", Reason="", readiness=false. Elapsed: 7.351096ms
    Sep  9 20:51:25.800: INFO: Pod "pod-subpath-test-configmap-xmgz": Phase="Running", Reason="", readiness=true. Elapsed: 2.013391948s
    Sep  9 20:51:27.807: INFO: Pod "pod-subpath-test-configmap-xmgz": Phase="Running", Reason="", readiness=true. Elapsed: 4.020237378s
    Sep  9 20:51:29.813: INFO: Pod "pod-subpath-test-configmap-xmgz": Phase="Running", Reason="", readiness=true. Elapsed: 6.026632537s
    Sep  9 20:51:31.820: INFO: Pod "pod-subpath-test-configmap-xmgz": Phase="Running", Reason="", readiness=true. Elapsed: 8.034020954s
    Sep  9 20:51:33.829: INFO: Pod "pod-subpath-test-configmap-xmgz": Phase="Running", Reason="", readiness=true. Elapsed: 10.042207084s
    Sep  9 20:51:35.836: INFO: Pod "pod-subpath-test-configmap-xmgz": Phase="Running", Reason="", readiness=true. Elapsed: 12.049718414s
    Sep  9 20:51:37.843: INFO: Pod "pod-subpath-test-configmap-xmgz": Phase="Running", Reason="", readiness=true. Elapsed: 14.057101471s
    Sep  9 20:51:39.856: INFO: Pod "pod-subpath-test-configmap-xmgz": Phase="Running", Reason="", readiness=true. Elapsed: 16.070164949s
    Sep  9 20:51:41.865: INFO: Pod "pod-subpath-test-configmap-xmgz": Phase="Running", Reason="", readiness=true. Elapsed: 18.078782107s
    Sep  9 20:51:43.874: INFO: Pod "pod-subpath-test-configmap-xmgz": Phase="Running", Reason="", readiness=true. Elapsed: 20.087785422s
    Sep  9 20:51:45.882: INFO: Pod "pod-subpath-test-configmap-xmgz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.095428938s
    STEP: Saw pod success
    Sep  9 20:51:45.882: INFO: Pod "pod-subpath-test-configmap-xmgz" satisfied condition "Succeeded or Failed"
    Sep  9 20:51:45.890: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod pod-subpath-test-configmap-xmgz container test-container-subpath-configmap-xmgz: <nil>
    STEP: delete the pod
    Sep  9 20:51:45.928: INFO: Waiting for pod pod-subpath-test-configmap-xmgz to disappear
    Sep  9 20:51:45.935: INFO: Pod pod-subpath-test-configmap-xmgz no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-xmgz
    Sep  9 20:51:45.935: INFO: Deleting pod "pod-subpath-test-configmap-xmgz" in namespace "subpath-565"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:51:45.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-565" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":150,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:52:46.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-3736" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":10,"skipped":152,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:52:46.204: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-runtime
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: create the container
    STEP: wait for the container to reach Failed
    STEP: get the container status
    STEP: the container should be terminated
    STEP: the termination message should be set
    Sep  9 20:52:48.294: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
    STEP: delete the container
    [AfterEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:52:48.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-4396" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":160,"failed":0}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:52:48.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-701" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":12,"skipped":177,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:52:48.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3476" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":13,"skipped":181,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:52:55.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-1370" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":14,"skipped":187,"failed":0}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:300.115 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule jobs when suspended [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":13,"skipped":204,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 20:53:00.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88258884-bb05-41de-9108-1c561c622a6c" in namespace "projected-9436" to be "Succeeded or Failed"
    Sep  9 20:53:00.347: INFO: Pod "downwardapi-volume-88258884-bb05-41de-9108-1c561c622a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.185878ms
    Sep  9 20:53:02.354: INFO: Pod "downwardapi-volume-88258884-bb05-41de-9108-1c561c622a6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012638395s
    STEP: Saw pod success
    Sep  9 20:53:02.355: INFO: Pod "downwardapi-volume-88258884-bb05-41de-9108-1c561c622a6c" satisfied condition "Succeeded or Failed"
    Sep  9 20:53:02.361: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth pod downwardapi-volume-88258884-bb05-41de-9108-1c561c622a6c container client-container: <nil>
    STEP: delete the pod
    Sep  9 20:53:02.409: INFO: Waiting for pod downwardapi-volume-88258884-bb05-41de-9108-1c561c622a6c to disappear
    Sep  9 20:53:02.415: INFO: Pod downwardapi-volume-88258884-bb05-41de-9108-1c561c622a6c no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:02.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9436" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":216,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 20:53:02.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-107302cf-7116-46a6-933c-69ab3b33f64e" in namespace "downward-api-3752" to be "Succeeded or Failed"
    Sep  9 20:53:02.540: INFO: Pod "downwardapi-volume-107302cf-7116-46a6-933c-69ab3b33f64e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.345261ms
    Sep  9 20:53:04.552: INFO: Pod "downwardapi-volume-107302cf-7116-46a6-933c-69ab3b33f64e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021145838s
    STEP: Saw pod success
    Sep  9 20:53:04.552: INFO: Pod "downwardapi-volume-107302cf-7116-46a6-933c-69ab3b33f64e" satisfied condition "Succeeded or Failed"
    Sep  9 20:53:04.558: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-6rlx5y pod downwardapi-volume-107302cf-7116-46a6-933c-69ab3b33f64e container client-container: <nil>
    STEP: delete the pod
    Sep  9 20:53:04.609: INFO: Waiting for pod downwardapi-volume-107302cf-7116-46a6-933c-69ab3b33f64e to disappear
    Sep  9 20:53:04.616: INFO: Pod downwardapi-volume-107302cf-7116-46a6-933c-69ab3b33f64e no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:04.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-3752" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":223,"failed":0}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 43 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:06.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-4765" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":15,"skipped":196,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:06.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-3638" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":16,"skipped":212,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:08.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-1299" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":16,"skipped":229,"failed":0}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-projected-all-test-volume-8e79d3f9-ebea-4de7-bb77-937e81de8ae5
    STEP: Creating secret with name secret-projected-all-test-volume-af0804e1-b472-4669-aa82-1b79231ddddf
    STEP: Creating a pod to test Check all projections for projected volume plugin
    Sep  9 20:53:06.687: INFO: Waiting up to 5m0s for pod "projected-volume-7a34254c-3b96-477a-b34e-37ae7a34af3d" in namespace "projected-5427" to be "Succeeded or Failed"
    Sep  9 20:53:06.690: INFO: Pod "projected-volume-7a34254c-3b96-477a-b34e-37ae7a34af3d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.365137ms
    Sep  9 20:53:08.709: INFO: Pod "projected-volume-7a34254c-3b96-477a-b34e-37ae7a34af3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02251441s
    STEP: Saw pod success
    Sep  9 20:53:08.709: INFO: Pod "projected-volume-7a34254c-3b96-477a-b34e-37ae7a34af3d" satisfied condition "Succeeded or Failed"
    Sep  9 20:53:08.724: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod projected-volume-7a34254c-3b96-477a-b34e-37ae7a34af3d container projected-all-volume-test: <nil>
    STEP: delete the pod
    Sep  9 20:53:08.782: INFO: Waiting for pod projected-volume-7a34254c-3b96-477a-b34e-37ae7a34af3d to disappear
    Sep  9 20:53:08.790: INFO: Pod projected-volume-7a34254c-3b96-477a-b34e-37ae7a34af3d no longer exists
    [AfterEach] [sig-storage] Projected combined
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:08.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5427" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":231,"failed":0}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:53:08.812: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 42 lines ...
    STEP: Destroying namespace "services-2916" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":231,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:28.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-7398" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":19,"skipped":259,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "webhook-2087-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":20,"skipped":271,"failed":0}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:53:32.828: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override all
    Sep  9 20:53:32.930: INFO: Waiting up to 5m0s for pod "client-containers-44e4b364-5e19-49d5-af7a-769f626d716b" in namespace "containers-6330" to be "Succeeded or Failed"
    Sep  9 20:53:32.941: INFO: Pod "client-containers-44e4b364-5e19-49d5-af7a-769f626d716b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.67628ms
    Sep  9 20:53:34.949: INFO: Pod "client-containers-44e4b364-5e19-49d5-af7a-769f626d716b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018490294s
    STEP: Saw pod success
    Sep  9 20:53:34.949: INFO: Pod "client-containers-44e4b364-5e19-49d5-af7a-769f626d716b" satisfied condition "Succeeded or Failed"
    Sep  9 20:53:34.953: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-6rlx5y pod client-containers-44e4b364-5e19-49d5-af7a-769f626d716b container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 20:53:34.975: INFO: Waiting for pod client-containers-44e4b364-5e19-49d5-af7a-769f626d716b to disappear
    Sep  9 20:53:34.980: INFO: Pod client-containers-44e4b364-5e19-49d5-af7a-769f626d716b no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:34.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-6330" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":287,"failed":0}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:53:35.013: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create ConfigMap with empty key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap that has name configmap-test-emptyKey-b932c852-5210-47e3-9af1-6ba01e8b52f0
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:35.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-8885" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":22,"skipped":291,"failed":0}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:39.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1801" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":23,"skipped":298,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:45.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-7221" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":17,"skipped":243,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "webhook-2968-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":24,"skipped":327,"failed":0}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:53:45.762: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep  9 20:53:45.841: INFO: Waiting up to 5m0s for pod "pod-f388b1fb-691d-4e8a-bc75-16c6b4eed301" in namespace "emptydir-7816" to be "Succeeded or Failed"
    Sep  9 20:53:45.847: INFO: Pod "pod-f388b1fb-691d-4e8a-bc75-16c6b4eed301": Phase="Pending", Reason="", readiness=false. Elapsed: 5.311674ms
    Sep  9 20:53:47.855: INFO: Pod "pod-f388b1fb-691d-4e8a-bc75-16c6b4eed301": Phase="Running", Reason="", readiness=true. Elapsed: 2.013436052s
    Sep  9 20:53:49.861: INFO: Pod "pod-f388b1fb-691d-4e8a-bc75-16c6b4eed301": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019970003s
    STEP: Saw pod success
    Sep  9 20:53:49.861: INFO: Pod "pod-f388b1fb-691d-4e8a-bc75-16c6b4eed301" satisfied condition "Succeeded or Failed"
    Sep  9 20:53:49.868: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod pod-f388b1fb-691d-4e8a-bc75-16c6b4eed301 container test-container: <nil>
    STEP: delete the pod
    Sep  9 20:53:49.907: INFO: Waiting for pod pod-f388b1fb-691d-4e8a-bc75-16c6b4eed301 to disappear
    Sep  9 20:53:49.913: INFO: Pod pod-f388b1fb-691d-4e8a-bc75-16c6b4eed301 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:49.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7816" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":245,"failed":0}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:52.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-8129" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":258,"failed":0}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 20:53:52.737: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69c7326a-f1ac-4fe1-8e31-6d0f511cd1bf" in namespace "projected-5820" to be "Succeeded or Failed"
    Sep  9 20:53:52.743: INFO: Pod "downwardapi-volume-69c7326a-f1ac-4fe1-8e31-6d0f511cd1bf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.577972ms
    Sep  9 20:53:54.751: INFO: Pod "downwardapi-volume-69c7326a-f1ac-4fe1-8e31-6d0f511cd1bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014294354s
    STEP: Saw pod success
    Sep  9 20:53:54.752: INFO: Pod "downwardapi-volume-69c7326a-f1ac-4fe1-8e31-6d0f511cd1bf" satisfied condition "Succeeded or Failed"
    Sep  9 20:53:54.759: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth pod downwardapi-volume-69c7326a-f1ac-4fe1-8e31-6d0f511cd1bf container client-container: <nil>
    STEP: delete the pod
    Sep  9 20:53:54.788: INFO: Waiting for pod downwardapi-volume-69c7326a-f1ac-4fe1-8e31-6d0f511cd1bf to disappear
    Sep  9 20:53:54.796: INFO: Pod downwardapi-volume-69c7326a-f1ac-4fe1-8e31-6d0f511cd1bf no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:53:54.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5820" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":260,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":220,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:51:37.324: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
    STEP: creating replication controller affinity-nodeport-timeout in namespace services-1799
    I0909 20:51:39.910771      15 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1799, replica count: 3
    I0909 20:51:42.962378      15 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
    Sep  9 20:51:42.984: INFO: Creating new exec pod
    Sep  9 20:51:46.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:51:48.403: INFO: rc: 1
    Sep  9 20:51:48.403: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:51:49.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:51:51.730: INFO: rc: 1
    Sep  9 20:51:51.730: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:51:52.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:51:54.768: INFO: rc: 1
    Sep  9 20:51:54.768: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:51:55.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:51:57.730: INFO: rc: 1
    Sep  9 20:51:57.730: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:51:58.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:00.765: INFO: rc: 1
    Sep  9 20:52:00.765: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:01.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:03.768: INFO: rc: 1
    Sep  9 20:52:03.769: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:04.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:06.786: INFO: rc: 1
    Sep  9 20:52:06.786: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:07.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:09.760: INFO: rc: 1
    Sep  9 20:52:09.760: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:10.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:12.768: INFO: rc: 1
    Sep  9 20:52:12.768: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:13.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:15.747: INFO: rc: 1
    Sep  9 20:52:15.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:16.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:18.739: INFO: rc: 1
    Sep  9 20:52:18.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:19.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:21.715: INFO: rc: 1
    Sep  9 20:52:21.715: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:22.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:24.734: INFO: rc: 1
    Sep  9 20:52:24.735: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:25.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:27.757: INFO: rc: 1
    Sep  9 20:52:27.757: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:28.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:30.777: INFO: rc: 1
    Sep  9 20:52:30.777: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:31.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:33.740: INFO: rc: 1
    Sep  9 20:52:33.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:34.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:36.733: INFO: rc: 1
    Sep  9 20:52:36.733: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:37.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:39.727: INFO: rc: 1
    Sep  9 20:52:39.727: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:40.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:42.741: INFO: rc: 1
    Sep  9 20:52:42.741: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + + nc -vecho -t hostName -w
     2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:
    exit status 1
    Retrying...
    Sep  9 20:52:43.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:45.748: INFO: rc: 1
    Sep  9 20:52:45.748: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:52:46.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:48.726: INFO: rc: 1
    Sep  9 20:52:48.726: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:52:49.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:51.748: INFO: rc: 1
    Sep  9 20:52:51.748: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:52:52.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:54.753: INFO: rc: 1
    Sep  9 20:52:54.753: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo+  hostNamenc
     -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:52:55.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:52:57.781: INFO: rc: 1
    Sep  9 20:52:57.781: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:52:58.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:00.835: INFO: rc: 1
    Sep  9 20:53:00.836: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:01.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:03.768: INFO: rc: 1
    Sep  9 20:53:03.768: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + + ncecho -v hostName -t
     -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:04.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:06.809: INFO: rc: 1
    Sep  9 20:53:06.809: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:07.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:09.893: INFO: rc: 1
    Sep  9 20:53:09.893: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:10.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:12.744: INFO: rc: 1
    Sep  9 20:53:12.744: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    + echo hostName
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:13.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:15.747: INFO: rc: 1
    Sep  9 20:53:15.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:16.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:19.007: INFO: rc: 1
    Sep  9 20:53:19.007: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:19.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:21.774: INFO: rc: 1
    Sep  9 20:53:21.774: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:22.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:24.752: INFO: rc: 1
    Sep  9 20:53:24.752: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:25.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:27.744: INFO: rc: 1
    Sep  9 20:53:27.745: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:28.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:30.777: INFO: rc: 1
    Sep  9 20:53:30.778: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:31.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:33.869: INFO: rc: 1
    Sep  9 20:53:33.869: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    + echo hostName
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:34.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:36.786: INFO: rc: 1
    Sep  9 20:53:36.786: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:37.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:39.743: INFO: rc: 1
    Sep  9 20:53:39.744: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:40.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:42.840: INFO: rc: 1
    Sep  9 20:53:42.840: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + + echonc hostName
     -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:43.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:45.758: INFO: rc: 1
    Sep  9 20:53:45.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:46.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:48.821: INFO: rc: 1
    Sep  9 20:53:48.821: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:48.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
    Sep  9 20:53:51.183: INFO: rc: 1
    Sep  9 20:53:51.183: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1799 exec execpod-affinity2s6mk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-timeout 80
    nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 20:53:51.184: FAIL: Unexpected error:
        <*errors.errorString | 0xc002e98150>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol
    occurred
    
... skipping 25 lines ...
    • Failure [149.954 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  9 20:53:51.184: Unexpected error:
          <*errors.errorString | 0xc002e98150>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497
    ------------------------------
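    The failing spec above reruns the same `kubectl exec ... nc` probe roughly every 3 seconds until a 2m0s deadline, then fails with "service is not reachable within 2m0s timeout". A minimal Python sketch of that generic poll-until-deadline pattern (the function name and parameters are illustrative, not the e2e suite's actual helper):

    ```python
    import time

    def wait_for_reachable(probe, timeout=120.0, interval=3.0):
        """Poll probe() until it returns True or the deadline passes.

        Mirrors the retry loop in the log: run the check, retry on
        failure, and report a timeout once `timeout` seconds elapse.
        """
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if probe():
                return True
            time.sleep(interval)
        return False

    # A probe that always fails, like the timed-out nc calls above
    # (short timeout/interval so the example finishes quickly):
    attempts = []
    def failing_probe():
        attempts.append(1)
        return False

    ok = wait_for_reachable(failing_probe, timeout=0.05, interval=0.01)
    ```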
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":450,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:50:31.559: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename container-probe
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
    • [SLOW TEST:243.187 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":450,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:54:38.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4507" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":24,"skipped":478,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    STEP: Destroying namespace "services-2523" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":25,"skipped":516,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 41 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:54:49.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-6262" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":26,"skipped":521,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 20:54:49.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d39bed6-c80f-4c6c-bd45-df14ca91073a" in namespace "downward-api-4215" to be "Succeeded or Failed"
    Sep  9 20:54:49.224: INFO: Pod "downwardapi-volume-2d39bed6-c80f-4c6c-bd45-df14ca91073a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.578413ms
    Sep  9 20:54:51.229: INFO: Pod "downwardapi-volume-2d39bed6-c80f-4c6c-bd45-df14ca91073a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010465533s
    STEP: Saw pod success
    Sep  9 20:54:51.230: INFO: Pod "downwardapi-volume-2d39bed6-c80f-4c6c-bd45-df14ca91073a" satisfied condition "Succeeded or Failed"
    Sep  9 20:54:51.234: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod downwardapi-volume-2d39bed6-c80f-4c6c-bd45-df14ca91073a container client-container: <nil>
    STEP: delete the pod
    Sep  9 20:54:51.255: INFO: Waiting for pod downwardapi-volume-2d39bed6-c80f-4c6c-bd45-df14ca91073a to disappear
    Sep  9 20:54:51.260: INFO: Pod downwardapi-volume-2d39bed6-c80f-4c6c-bd45-df14ca91073a no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:54:51.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-4215" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":550,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
    [It] should contain environment variables for services [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  9 20:54:51.349: INFO: The status of Pod server-envvars-619a8117-321e-41e4-8228-e2eb9a49781e is Pending, waiting for it to be Running (with Ready = true)
    Sep  9 20:54:53.354: INFO: The status of Pod server-envvars-619a8117-321e-41e4-8228-e2eb9a49781e is Running (Ready = true)
    Sep  9 20:54:53.382: INFO: Waiting up to 5m0s for pod "client-envvars-23b5d360-2fbd-4127-84fb-49c112ea1ab4" in namespace "pods-7095" to be "Succeeded or Failed"
    Sep  9 20:54:53.392: INFO: Pod "client-envvars-23b5d360-2fbd-4127-84fb-49c112ea1ab4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.971086ms
    Sep  9 20:54:55.397: INFO: Pod "client-envvars-23b5d360-2fbd-4127-84fb-49c112ea1ab4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014926051s
    STEP: Saw pod success
    Sep  9 20:54:55.397: INFO: Pod "client-envvars-23b5d360-2fbd-4127-84fb-49c112ea1ab4" satisfied condition "Succeeded or Failed"
    Sep  9 20:54:55.401: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod client-envvars-23b5d360-2fbd-4127-84fb-49c112ea1ab4 container env3cont: <nil>
    STEP: delete the pod
    Sep  9 20:54:55.424: INFO: Waiting for pod client-envvars-23b5d360-2fbd-4127-84fb-49c112ea1ab4 to disappear
    Sep  9 20:54:55.428: INFO: Pod client-envvars-23b5d360-2fbd-4127-84fb-49c112ea1ab4 no longer exists
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:54:55.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-7095" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":566,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:54:55.457: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via environment variable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-663/configmap-test-42d09fce-b5d5-4fb6-b916-3f3dae3a6e30
    STEP: Creating a pod to test consume configMaps
    Sep  9 20:54:55.508: INFO: Waiting up to 5m0s for pod "pod-configmaps-6246fa69-e4f3-4f94-bbb4-eac636da5cc6" in namespace "configmap-663" to be "Succeeded or Failed"
    Sep  9 20:54:55.512: INFO: Pod "pod-configmaps-6246fa69-e4f3-4f94-bbb4-eac636da5cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.708663ms
    Sep  9 20:54:57.518: INFO: Pod "pod-configmaps-6246fa69-e4f3-4f94-bbb4-eac636da5cc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009352427s
    STEP: Saw pod success
    Sep  9 20:54:57.518: INFO: Pod "pod-configmaps-6246fa69-e4f3-4f94-bbb4-eac636da5cc6" satisfied condition "Succeeded or Failed"
    Sep  9 20:54:57.522: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-configmaps-6246fa69-e4f3-4f94-bbb4-eac636da5cc6 container env-test: <nil>
    STEP: delete the pod
    Sep  9 20:54:57.544: INFO: Waiting for pod pod-configmaps-6246fa69-e4f3-4f94-bbb4-eac636da5cc6 to disappear
    Sep  9 20:54:57.548: INFO: Pod pod-configmaps-6246fa69-e4f3-4f94-bbb4-eac636da5cc6 no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:54:57.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-663" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":577,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
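    The repeated `Waiting up to 5m0s … Phase="Pending" → Phase="Succeeded"` lines above come from the e2e framework polling the pod's status on a fixed interval until the pod reaches a terminal phase or the timeout expires. A minimal Python sketch of that wait loop (the `get_phase` callback, the 2-second interval, and the injectable `sleep`/`clock` parameters are illustrative assumptions, not the framework's actual API):

    ```python
    import time

    def wait_for_pod_phase(get_phase, timeout=300, interval=2.0,
                           sleep=time.sleep, clock=time.monotonic):
        """Poll get_phase() until it returns a terminal phase ("Succeeded" or
        "Failed") or `timeout` seconds elapse. Returns the final phase."""
        start = clock()
        while True:
            phase = get_phase()
            elapsed = clock() - start
            # Mirrors the log's per-poll INFO line.
            print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
            if phase in ("Succeeded", "Failed"):
                return phase
            if elapsed >= timeout:
                raise TimeoutError(f"pod still {phase} after {timeout}s")
            sleep(interval)

    # Illustrative usage with a canned phase sequence instead of a real cluster:
    phases = iter(["Pending", "Pending", "Succeeded"])
    result = wait_for_pod_phase(lambda: next(phases), sleep=lambda _: None)
    ```

    In the real framework the poll hits the API server each iteration; the canned sequence here just reproduces the Pending-then-Succeeded progression visible in the log.
    
    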
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 191 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:55:06.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-8708" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":30,"skipped":600,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:55:25.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6235" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":31,"skipped":613,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:55:25.213: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-3e9e5fef-b10e-4fcb-a51d-7fd42e4c0795
    STEP: Creating a pod to test consume configMaps
    Sep  9 20:55:25.265: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-674375fa-392b-4d4e-970b-243153306c34" in namespace "projected-7074" to be "Succeeded or Failed"
    Sep  9 20:55:25.268: INFO: Pod "pod-projected-configmaps-674375fa-392b-4d4e-970b-243153306c34": Phase="Pending", Reason="", readiness=false. Elapsed: 3.103174ms
    Sep  9 20:55:27.273: INFO: Pod "pod-projected-configmaps-674375fa-392b-4d4e-970b-243153306c34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008692459s
    STEP: Saw pod success
    Sep  9 20:55:27.273: INFO: Pod "pod-projected-configmaps-674375fa-392b-4d4e-970b-243153306c34" satisfied condition "Succeeded or Failed"
    Sep  9 20:55:27.277: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-projected-configmaps-674375fa-392b-4d4e-970b-243153306c34 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 20:55:27.294: INFO: Waiting for pod pod-projected-configmaps-674375fa-392b-4d4e-970b-243153306c34 to disappear
    Sep  9 20:55:27.298: INFO: Pod pod-projected-configmaps-674375fa-392b-4d4e-970b-243153306c34 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:55:27.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7074" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":616,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:55:31.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9784" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":636,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:55:31.522: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-f4f02bc1-5a08-429b-8cfc-d4dbe9d294a2
    STEP: Creating a pod to test consume secrets
    Sep  9 20:55:31.576: INFO: Waiting up to 5m0s for pod "pod-secrets-f23167b1-5d22-4f61-b372-c7fbe001b9a9" in namespace "secrets-2060" to be "Succeeded or Failed"
    Sep  9 20:55:31.585: INFO: Pod "pod-secrets-f23167b1-5d22-4f61-b372-c7fbe001b9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.021696ms
    Sep  9 20:55:33.590: INFO: Pod "pod-secrets-f23167b1-5d22-4f61-b372-c7fbe001b9a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013746936s
    STEP: Saw pod success
    Sep  9 20:55:33.590: INFO: Pod "pod-secrets-f23167b1-5d22-4f61-b372-c7fbe001b9a9" satisfied condition "Succeeded or Failed"
    Sep  9 20:55:33.593: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod pod-secrets-f23167b1-5d22-4f61-b372-c7fbe001b9a9 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  9 20:55:33.611: INFO: Waiting for pod pod-secrets-f23167b1-5d22-4f61-b372-c7fbe001b9a9 to disappear
    Sep  9 20:55:33.616: INFO: Pod pod-secrets-f23167b1-5d22-4f61-b372-c7fbe001b9a9 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:55:33.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-2060" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":642,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:55:33.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-4761" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":35,"skipped":662,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-configmap-qljv
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  9 20:55:33.783: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qljv" in namespace "subpath-7199" to be "Succeeded or Failed"
    Sep  9 20:55:33.787: INFO: Pod "pod-subpath-test-configmap-qljv": Phase="Pending", Reason="", readiness=false. Elapsed: 3.668908ms
    Sep  9 20:55:35.792: INFO: Pod "pod-subpath-test-configmap-qljv": Phase="Running", Reason="", readiness=true. Elapsed: 2.008651394s
    Sep  9 20:55:37.798: INFO: Pod "pod-subpath-test-configmap-qljv": Phase="Running", Reason="", readiness=true. Elapsed: 4.014639908s
    Sep  9 20:55:39.803: INFO: Pod "pod-subpath-test-configmap-qljv": Phase="Running", Reason="", readiness=true. Elapsed: 6.019729841s
    Sep  9 20:55:41.808: INFO: Pod "pod-subpath-test-configmap-qljv": Phase="Running", Reason="", readiness=true. Elapsed: 8.025124624s
    Sep  9 20:55:43.814: INFO: Pod "pod-subpath-test-configmap-qljv": Phase="Running", Reason="", readiness=true. Elapsed: 10.030457363s
    Sep  9 20:55:45.819: INFO: Pod "pod-subpath-test-configmap-qljv": Phase="Running", Reason="", readiness=true. Elapsed: 12.035785505s
    Sep  9 20:55:47.824: INFO: Pod "pod-subpath-test-configmap-qljv": Phase="Running", Reason="", readiness=true. Elapsed: 14.040403081s
    Sep  9 20:55:49.829: INFO: Pod "pod-subpath-test-configmap-qljv": Phase="Running", Reason="", readiness=true. Elapsed: 16.045507211s
    Sep  9 20:55:51.835: INFO: Pod "pod-subpath-test-configmap-qljv": Phase="Running", Reason="", readiness=true. Elapsed: 18.051475998s
    Sep  9 20:55:53.840: INFO: Pod "pod-subpath-test-configmap-qljv": Phase="Running", Reason="", readiness=true. Elapsed: 20.056594326s
    Sep  9 20:55:55.845: INFO: Pod "pod-subpath-test-configmap-qljv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.061958053s
    STEP: Saw pod success
    Sep  9 20:55:55.845: INFO: Pod "pod-subpath-test-configmap-qljv" satisfied condition "Succeeded or Failed"
    Sep  9 20:55:55.849: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod pod-subpath-test-configmap-qljv container test-container-subpath-configmap-qljv: <nil>
    STEP: delete the pod
    Sep  9 20:55:55.868: INFO: Waiting for pod pod-subpath-test-configmap-qljv to disappear
    Sep  9 20:55:55.872: INFO: Pod pod-subpath-test-configmap-qljv no longer exists
    STEP: Deleting pod pod-subpath-test-configmap-qljv
    Sep  9 20:55:55.872: INFO: Deleting pod "pod-subpath-test-configmap-qljv" in namespace "subpath-7199"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:55:55.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-7199" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":36,"skipped":664,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 59 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
        should perform rolling updates and roll backs of template modifications [Conformance]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":25,"skipped":328,"failed":0}

    
    SSSSS
    ------------------------------
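    Each `{"msg":"PASSED …"}` line above is a JSON progress record emitted per completed spec, carrying running `completed`/`skipped` counters and the accumulated failure list for that Ginkgo node. A small sketch of pulling those fields out of such a line (the field names are taken from the log itself; the parsing helper is an illustrative assumption, not part of any tool):

    ```python
    import json

    def parse_progress(line):
        """Decode one JSON progress record into (spec name, counters)."""
        rec = json.loads(line)
        # The spec name is embedded in "msg" after a PASSED/FAILED verdict.
        name = rec["msg"].removeprefix("PASSED ").removeprefix("FAILED ")
        return name, {
            "completed": rec["completed"],
            "skipped": rec["skipped"],
            "failures": rec.get("failures", []),
        }

    # A record copied from the log above:
    line = ('{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality '
            '[StatefulSetBasic] should perform rolling updates and roll backs of '
            'template modifications [Conformance]","total":-1,"completed":25,'
            '"skipped":328,"failed":0}')
    name, counters = parse_progress(line)
    ```

    Note that `total` is `-1` throughout this run, so only the per-node `completed`/`skipped` counters and the failure list carry useful progress information.
    
    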
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:55:55.911: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  9 20:55:57.965: INFO: Deleting pod "var-expansion-8a3920d1-8775-49a4-929a-ae209701dadc" in namespace "var-expansion-1981"
    Sep  9 20:55:57.973: INFO: Wait up to 5m0s for pod "var-expansion-8a3920d1-8775-49a4-929a-ae209701dadc" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:56:09.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-1981" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":37,"skipped":675,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "webhook-3553-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":26,"skipped":333,"failed":0}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:56:14.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4588" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":678,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:56:22.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-8167" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":27,"skipped":345,"failed":0}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:56:27.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-3313" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":364,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:56:27.167: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide host IP as an env var [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  9 20:56:27.204: INFO: Waiting up to 5m0s for pod "downward-api-0f930a47-f8a3-4507-b22f-c9e97abdacfb" in namespace "downward-api-6564" to be "Succeeded or Failed"
    Sep  9 20:56:27.212: INFO: Pod "downward-api-0f930a47-f8a3-4507-b22f-c9e97abdacfb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013868ms
    Sep  9 20:56:29.216: INFO: Pod "downward-api-0f930a47-f8a3-4507-b22f-c9e97abdacfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012681958s
    STEP: Saw pod success
    Sep  9 20:56:29.217: INFO: Pod "downward-api-0f930a47-f8a3-4507-b22f-c9e97abdacfb" satisfied condition "Succeeded or Failed"
    Sep  9 20:56:29.220: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod downward-api-0f930a47-f8a3-4507-b22f-c9e97abdacfb container dapi-container: <nil>
    STEP: delete the pod
    Sep  9 20:56:29.238: INFO: Waiting for pod downward-api-0f930a47-f8a3-4507-b22f-c9e97abdacfb to disappear
    Sep  9 20:56:29.241: INFO: Pod downward-api-0f930a47-f8a3-4507-b22f-c9e97abdacfb no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:56:29.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6564" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":402,"failed":0}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:56:29.290: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should mount projected service account token [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test service account token: 
    Sep  9 20:56:29.326: INFO: Waiting up to 5m0s for pod "test-pod-fe6f33eb-0e86-4482-9356-a6c9e4d61b37" in namespace "svcaccounts-4409" to be "Succeeded or Failed"
    Sep  9 20:56:29.329: INFO: Pod "test-pod-fe6f33eb-0e86-4482-9356-a6c9e4d61b37": Phase="Pending", Reason="", readiness=false. Elapsed: 3.345289ms
    Sep  9 20:56:31.335: INFO: Pod "test-pod-fe6f33eb-0e86-4482-9356-a6c9e4d61b37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009205482s
    STEP: Saw pod success
    Sep  9 20:56:31.335: INFO: Pod "test-pod-fe6f33eb-0e86-4482-9356-a6c9e4d61b37" satisfied condition "Succeeded or Failed"
    Sep  9 20:56:31.339: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-6rlx5y pod test-pod-fe6f33eb-0e86-4482-9356-a6c9e4d61b37 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 20:56:31.365: INFO: Waiting for pod test-pod-fe6f33eb-0e86-4482-9356-a6c9e4d61b37 to disappear
    Sep  9 20:56:31.368: INFO: Pod test-pod-fe6f33eb-0e86-4482-9356-a6c9e4d61b37 no longer exists
    [AfterEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:56:31.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-4409" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":30,"skipped":431,"failed":0}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:56:14.635: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename init-container
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
    [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod
    Sep  9 20:56:14.671: INFO: PodSpec: initContainers in spec.initContainers
    Sep  9 20:56:58.614: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1053bd08-c040-43db-ac12-ddb820e2b2d5", GenerateName:"", Namespace:"init-container-1748", SelfLink:"", UID:"fb77876e-1df7-4261-90de-8fbbf4078153", ResourceVersion:"9430", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63798353774, loc:(*time.Location)(0x9e363e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"671794436"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030646f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003064708)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003064720), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003064738)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-l7mkg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc003527840), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-l7mkg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-l7mkg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-l7mkg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0027b7fc0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7", 
HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003389420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003f20040)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003f20060)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003f20068), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003f2006c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0020d5c80), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798353774, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798353774, loc:(*time.Location)(0x9e363e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798353774, loc:(*time.Location)(0x9e363e0)}}, 
Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798353774, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"192.168.0.30", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.0.30"}}, StartTime:(*v1.Time)(0xc003064768), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc003064780), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003389500)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://9ef132c25bbc63e8f2f4df6f0aa7fe44d8e978aaa6dc1a0bb1f6a13a2f624a3f", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0035278e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0035278c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003f200ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}

    [AfterEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:56:58.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-1748" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":39,"skipped":687,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:57:01.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-5670" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":688,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] InitContainer [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:57:04.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "init-container-7335" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":41,"skipped":730,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:57:04.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-3183" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":42,"skipped":736,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:57:04.886: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on tmpfs
    Sep  9 20:57:04.933: INFO: Waiting up to 5m0s for pod "pod-5bb84681-971f-4dae-9f8f-b4287fb5bbe1" in namespace "emptydir-7516" to be "Succeeded or Failed"

    Sep  9 20:57:04.938: INFO: Pod "pod-5bb84681-971f-4dae-9f8f-b4287fb5bbe1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179415ms
    Sep  9 20:57:06.942: INFO: Pod "pod-5bb84681-971f-4dae-9f8f-b4287fb5bbe1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008479507s
    STEP: Saw pod success
    Sep  9 20:57:06.942: INFO: Pod "pod-5bb84681-971f-4dae-9f8f-b4287fb5bbe1" satisfied condition "Succeeded or Failed"

    Sep  9 20:57:06.946: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-5bb84681-971f-4dae-9f8f-b4287fb5bbe1 container test-container: <nil>
    STEP: delete the pod
    Sep  9 20:57:06.964: INFO: Waiting for pod pod-5bb84681-971f-4dae-9f8f-b4287fb5bbe1 to disappear
    Sep  9 20:57:06.967: INFO: Pod pod-5bb84681-971f-4dae-9f8f-b4287fb5bbe1 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:57:06.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-7516" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":760,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:57:06.986: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-16fc25fd-d7bf-406e-a93c-c824bb40697e
    STEP: Creating a pod to test consume secrets
    Sep  9 20:57:07.034: INFO: Waiting up to 5m0s for pod "pod-secrets-4aabde87-1fb7-46d2-afbd-2ad3cb6df135" in namespace "secrets-3827" to be "Succeeded or Failed"

    Sep  9 20:57:07.038: INFO: Pod "pod-secrets-4aabde87-1fb7-46d2-afbd-2ad3cb6df135": Phase="Pending", Reason="", readiness=false. Elapsed: 3.661011ms
    Sep  9 20:57:09.043: INFO: Pod "pod-secrets-4aabde87-1fb7-46d2-afbd-2ad3cb6df135": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008918117s
    STEP: Saw pod success
    Sep  9 20:57:09.043: INFO: Pod "pod-secrets-4aabde87-1fb7-46d2-afbd-2ad3cb6df135" satisfied condition "Succeeded or Failed"

    Sep  9 20:57:09.047: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-secrets-4aabde87-1fb7-46d2-afbd-2ad3cb6df135 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  9 20:57:09.064: INFO: Waiting for pod pod-secrets-4aabde87-1fb7-46d2-afbd-2ad3cb6df135 to disappear
    Sep  9 20:57:09.069: INFO: Pod pod-secrets-4aabde87-1fb7-46d2-afbd-2ad3cb6df135 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:57:09.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-3827" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":764,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:57:09.159: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override arguments
    Sep  9 20:57:09.199: INFO: Waiting up to 5m0s for pod "client-containers-36adcde1-b872-4ef2-97d6-c1e7da58a4a9" in namespace "containers-6670" to be "Succeeded or Failed"

    Sep  9 20:57:09.202: INFO: Pod "client-containers-36adcde1-b872-4ef2-97d6-c1e7da58a4a9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.16987ms
    Sep  9 20:57:11.207: INFO: Pod "client-containers-36adcde1-b872-4ef2-97d6-c1e7da58a4a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008437588s
    STEP: Saw pod success
    Sep  9 20:57:11.207: INFO: Pod "client-containers-36adcde1-b872-4ef2-97d6-c1e7da58a4a9" satisfied condition "Succeeded or Failed"

    Sep  9 20:57:11.211: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod client-containers-36adcde1-b872-4ef2-97d6-c1e7da58a4a9 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 20:57:11.234: INFO: Waiting for pod client-containers-36adcde1-b872-4ef2-97d6-c1e7da58a4a9 to disappear
    Sep  9 20:57:11.237: INFO: Pod client-containers-36adcde1-b872-4ef2-97d6-c1e7da58a4a9 no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:57:11.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-6670" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":808,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    Sep  9 20:57:13.677: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
    Sep  9 20:57:13.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6058 describe pod agnhost-primary-bxmk6'
    Sep  9 20:57:13.839: INFO: stderr: ""
    Sep  9 20:57:13.839: INFO: stdout: "Name:         agnhost-primary-bxmk6\nNamespace:    kubectl-6058\nPriority:     0\nNode:         k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7/172.18.0.4\nStart Time:   Fri, 09 Sep 2022 20:57:12 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           192.168.0.33\nIPs:\n  IP:           192.168.0.33\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://69a6d45f8c3cd7b2547eb81ed1d97d4643e9b530e4876ac7174953ee46857f48\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 09 Sep 2022 20:57:13 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7b4t (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-p7b4t:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  1s    default-scheduler  Successfully assigned 
kubectl-6058/agnhost-primary-bxmk6 to k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7\n  Normal  Pulled     1s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    1s    kubelet            Created container agnhost-primary\n  Normal  Started    0s    kubelet            Started container agnhost-primary\n"
    Sep  9 20:57:13.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6058 describe rc agnhost-primary'
    Sep  9 20:57:13.970: INFO: stderr: ""
    Sep  9 20:57:13.970: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-6058\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  1s    replication-controller  Created pod: agnhost-primary-bxmk6\n"

    Sep  9 20:57:13.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6058 describe service agnhost-primary'
    Sep  9 20:57:14.103: INFO: stderr: ""
    Sep  9 20:57:14.103: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-6058\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.143.25.146\nIPs:               10.143.25.146\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         192.168.0.33:6379\nSession Affinity:  None\nEvents:            <none>\n"
    Sep  9 20:57:14.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6058 describe node k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf'
    Sep  9 20:57:14.276: INFO: stderr: ""
    Sep  9 20:57:14.276: INFO: stdout: "Name:               k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf\nRoles:              control-plane,master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/control-plane=\n                    node-role.kubernetes.io/master=\n                    node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations:        cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-b2vx3j\n                    cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-6xwdmz\n                    cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf\n                    cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n                    cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-b2vx3j-7tqwn\n                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 09 Sep 2022 20:39:57 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf\n  AcquireTime:     <unset>\n  RenewTime:       Fri, 09 Sep 2022 20:57:10 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Fri, 09 Sep 2022 20:55:43 +0000   Fri, 09 Sep 2022 20:39:57 +0000   KubeletHasSufficientMemory   
kubelet has sufficient memory available\n  DiskPressure     False   Fri, 09 Sep 2022 20:55:43 +0000   Fri, 09 Sep 2022 20:39:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Fri, 09 Sep 2022 20:55:43 +0000   Fri, 09 Sep 2022 20:39:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Fri, 09 Sep 2022 20:55:43 +0000   Fri, 09 Sep 2022 20:40:39 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.9\n  Hostname:    k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860676Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65860676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 86e6d78daecb4e10a7efe5ba16e07209\n  System UUID:                def06b6b-28c8-4420-9269-f8a4f0d0ac76\n  Boot ID:                    0253b85b-1aab-40c9-a6fb-15e587ea718e\n  Kernel Version:             5.4.0-1076-gke\n  OS Image:                   Ubuntu 22.04.1 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.6.7\n  Kubelet Version:            v1.21.14\n  Kube-Proxy Version:         v1.21.14\nPodCIDR:                      192.168.5.0/24\nPodCIDRs:                     192.168.5.0/24\nProviderID:                   docker:////k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf\nNon-terminated Pods:          (6 in total)\n  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                                      
------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m\n  kube-system                 kindnet-2m297                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m\n  kube-system                 kube-apiserver-k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf             250m (3%)     0 (0%)      0 (0%)           0 (0%)         17m\n  kube-system                 kube-controller-manager-k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf    200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m\n  kube-system                 kube-proxy-l8rjz                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m\n  kube-system                 kube-scheduler-k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf             100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                750m (9%)   100m (1%)\n  memory             150Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type     Reason                    Age                From        Message\n  ----     ------                    ----               ----        -------\n  Normal   Starting                  17m                kubelet     Starting kubelet.\n  Warning  InvalidDiskCapacity       17m                kubelet     invalid capacity 0 on image filesystem\n  Normal   NodeHasSufficientMemory   17m (x2 over 17m)  kubelet     Node k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     
17m (x2 over 17m)  kubelet     Node k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      17m (x2 over 17m)  kubelet     Node k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced   17m                kubelet     Updated Node Allocatable limit across pods\n  Warning  CheckLimitsForResolvConf  17m                kubelet     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   Starting                  16m                kube-proxy  Starting kube-proxy.\n  Normal   NodeReady                 16m                kubelet     Node k8s-upgrade-and-conformance-b2vx3j-7tqwn-xcslf status is now: NodeReady\n  Normal   Starting                  14m                kube-proxy  Starting kube-proxy.\n"
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:57:14.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-6058" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":46,"skipped":809,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
    STEP: Destroying namespace "services-7308" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":47,"skipped":830,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:57:31.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-2341" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":48,"skipped":866,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Discovery
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 89 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:57:32.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "discovery-8239" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":49,"skipped":882,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-8272" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":50,"skipped":892,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:57:48.932: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on node default medium
    Sep  9 20:57:48.977: INFO: Waiting up to 5m0s for pod "pod-4e674c86-8a81-40eb-92a3-af68b665d814" in namespace "emptydir-1241" to be "Succeeded or Failed"
    Sep  9 20:57:48.981: INFO: Pod "pod-4e674c86-8a81-40eb-92a3-af68b665d814": Phase="Pending", Reason="", readiness=false. Elapsed: 3.873939ms
    Sep  9 20:57:50.985: INFO: Pod "pod-4e674c86-8a81-40eb-92a3-af68b665d814": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00760949s
    STEP: Saw pod success
    Sep  9 20:57:50.985: INFO: Pod "pod-4e674c86-8a81-40eb-92a3-af68b665d814" satisfied condition "Succeeded or Failed"
    Sep  9 20:57:50.988: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-4e674c86-8a81-40eb-92a3-af68b665d814 container test-container: <nil>
    STEP: delete the pod
    Sep  9 20:57:51.005: INFO: Waiting for pod pod-4e674c86-8a81-40eb-92a3-af68b665d814 to disappear
    Sep  9 20:57:51.009: INFO: Pod pod-4e674c86-8a81-40eb-92a3-af68b665d814 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:57:51.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1241" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":983,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:57:57.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-1603" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":52,"skipped":985,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 37 lines ...
    Sep  9 20:58:03.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-1597 explain e2e-test-crd-publish-openapi-7694-crds.spec'
    Sep  9 20:58:03.620: INFO: stderr: ""
    Sep  9 20:58:03.620: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7694-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
    Sep  9 20:58:03.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-1597 explain e2e-test-crd-publish-openapi-7694-crds.spec.bars'
    Sep  9 20:58:03.882: INFO: stderr: ""
    Sep  9 20:58:03.882: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7694-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
    STEP: kubectl explain works to return error when explain is called on property that doesn't exist
    Sep  9 20:58:03.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-1597 explain e2e-test-crd-publish-openapi-7694-crds.spec.bars2'
    Sep  9 20:58:04.144: INFO: rc: 1
    [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:58:06.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-1597" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":53,"skipped":1006,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:58:06.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-952" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":54,"skipped":1009,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:58:12.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-8300" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":55,"skipped":1010,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:58:12.892: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename crd-publish-openapi
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:58:19.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-8606" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":56,"skipped":1010,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":220,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:54:07.285: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename services
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 69 lines ...
    • [SLOW TEST:279.950 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":220,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:58:47.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-2059" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":15,"skipped":244,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 20:58:47.550: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f9a9669-3388-46b9-8d9e-fa0d9c4f725a" in namespace "projected-4544" to be "Succeeded or Failed"
    Sep  9 20:58:47.557: INFO: Pod "downwardapi-volume-9f9a9669-3388-46b9-8d9e-fa0d9c4f725a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.949771ms
    Sep  9 20:58:49.561: INFO: Pod "downwardapi-volume-9f9a9669-3388-46b9-8d9e-fa0d9c4f725a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010387658s
    STEP: Saw pod success
    Sep  9 20:58:49.561: INFO: Pod "downwardapi-volume-9f9a9669-3388-46b9-8d9e-fa0d9c4f725a" satisfied condition "Succeeded or Failed"
    Sep  9 20:58:49.564: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth pod downwardapi-volume-9f9a9669-3388-46b9-8d9e-fa0d9c4f725a container client-container: <nil>
    STEP: delete the pod
    Sep  9 20:58:49.591: INFO: Waiting for pod downwardapi-volume-9f9a9669-3388-46b9-8d9e-fa0d9c4f725a to disappear
    Sep  9 20:58:49.594: INFO: Pod downwardapi-volume-9f9a9669-3388-46b9-8d9e-fa0d9c4f725a no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:58:49.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4544" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":245,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-instrumentation] Events
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:58:49.605: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename events
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:58:49.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-2838" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":17,"skipped":245,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    Sep  9 20:58:51.820: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:51.823: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:51.858: INFO: Unable to read jessie_udp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:51.862: INFO: Unable to read jessie_tcp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:51.868: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:51.873: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:51.907: INFO: Lookups using dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311 failed for: [wheezy_udp@dns-test-service.dns-9966.svc.cluster.local wheezy_tcp@dns-test-service.dns-9966.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local jessie_udp@dns-test-service.dns-9966.svc.cluster.local jessie_tcp@dns-test-service.dns-9966.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local]

    
    Sep  9 20:58:56.914: INFO: Unable to read wheezy_udp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:56.918: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:56.923: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:56.927: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:56.956: INFO: Unable to read jessie_udp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:56.960: INFO: Unable to read jessie_tcp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:56.965: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:56.970: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:58:56.994: INFO: Lookups using dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311 failed for: [wheezy_udp@dns-test-service.dns-9966.svc.cluster.local wheezy_tcp@dns-test-service.dns-9966.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local jessie_udp@dns-test-service.dns-9966.svc.cluster.local jessie_tcp@dns-test-service.dns-9966.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local]

    
    Sep  9 20:59:01.914: INFO: Unable to read wheezy_udp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:01.919: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:01.926: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:01.931: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:01.965: INFO: Unable to read jessie_udp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:01.969: INFO: Unable to read jessie_tcp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:01.974: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:01.978: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:02.005: INFO: Lookups using dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311 failed for: [wheezy_udp@dns-test-service.dns-9966.svc.cluster.local wheezy_tcp@dns-test-service.dns-9966.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local jessie_udp@dns-test-service.dns-9966.svc.cluster.local jessie_tcp@dns-test-service.dns-9966.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local]

    
    Sep  9 20:59:06.914: INFO: Unable to read wheezy_udp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:06.918: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:06.923: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:06.928: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:06.966: INFO: Unable to read jessie_udp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:06.970: INFO: Unable to read jessie_tcp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:06.975: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:06.979: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:07.012: INFO: Lookups using dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311 failed for: [wheezy_udp@dns-test-service.dns-9966.svc.cluster.local wheezy_tcp@dns-test-service.dns-9966.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local jessie_udp@dns-test-service.dns-9966.svc.cluster.local jessie_tcp@dns-test-service.dns-9966.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local]

    
    Sep  9 20:59:11.914: INFO: Unable to read wheezy_udp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:11.918: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:11.922: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:11.926: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:11.951: INFO: Unable to read jessie_udp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:11.954: INFO: Unable to read jessie_tcp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:11.958: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:11.962: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:11.984: INFO: Lookups using dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311 failed for: [wheezy_udp@dns-test-service.dns-9966.svc.cluster.local wheezy_tcp@dns-test-service.dns-9966.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local jessie_udp@dns-test-service.dns-9966.svc.cluster.local jessie_tcp@dns-test-service.dns-9966.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local]

    
    Sep  9 20:59:16.912: INFO: Unable to read wheezy_udp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:16.918: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:16.923: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:16.928: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:16.959: INFO: Unable to read jessie_udp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:16.964: INFO: Unable to read jessie_tcp@dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:16.970: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:16.974: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local from pod dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311: the server could not find the requested resource (get pods dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311)
    Sep  9 20:59:16.999: INFO: Lookups using dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311 failed for: [wheezy_udp@dns-test-service.dns-9966.svc.cluster.local wheezy_tcp@dns-test-service.dns-9966.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local jessie_udp@dns-test-service.dns-9966.svc.cluster.local jessie_tcp@dns-test-service.dns-9966.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9966.svc.cluster.local]
    
    Sep  9 20:59:21.991: INFO: DNS probes using dns-9966/dns-test-b11f86fe-9ce5-46c6-ac54-e5c3858ca311 succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:59:22.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-9966" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":18,"skipped":265,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:59:22.161: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-cfd27e19-a697-4bd7-858e-ed5f354a2633
    STEP: Creating a pod to test consume configMaps
    Sep  9 20:59:22.230: INFO: Waiting up to 5m0s for pod "pod-configmaps-be9a8086-35e4-4624-9eb5-5d9fa485a186" in namespace "configmap-7212" to be "Succeeded or Failed"
    Sep  9 20:59:22.237: INFO: Pod "pod-configmaps-be9a8086-35e4-4624-9eb5-5d9fa485a186": Phase="Pending", Reason="", readiness=false. Elapsed: 6.939597ms
    Sep  9 20:59:24.243: INFO: Pod "pod-configmaps-be9a8086-35e4-4624-9eb5-5d9fa485a186": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012844668s
    STEP: Saw pod success
    Sep  9 20:59:24.243: INFO: Pod "pod-configmaps-be9a8086-35e4-4624-9eb5-5d9fa485a186" satisfied condition "Succeeded or Failed"
    Sep  9 20:59:24.248: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-configmaps-be9a8086-35e4-4624-9eb5-5d9fa485a186 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 20:59:24.282: INFO: Waiting for pod pod-configmaps-be9a8086-35e4-4624-9eb5-5d9fa485a186 to disappear
    Sep  9 20:59:24.287: INFO: Pod pod-configmaps-be9a8086-35e4-4624-9eb5-5d9fa485a186 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:59:24.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7212" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":286,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:59:24.355: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-map-72e5ac21-e13e-4a59-b64a-127c166b6b99
    STEP: Creating a pod to test consume secrets
    Sep  9 20:59:24.418: INFO: Waiting up to 5m0s for pod "pod-secrets-e0028da1-1d7e-488e-9828-5656b670227d" in namespace "secrets-7737" to be "Succeeded or Failed"
    Sep  9 20:59:24.422: INFO: Pod "pod-secrets-e0028da1-1d7e-488e-9828-5656b670227d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16344ms
    Sep  9 20:59:26.427: INFO: Pod "pod-secrets-e0028da1-1d7e-488e-9828-5656b670227d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009439516s
    STEP: Saw pod success
    Sep  9 20:59:26.427: INFO: Pod "pod-secrets-e0028da1-1d7e-488e-9828-5656b670227d" satisfied condition "Succeeded or Failed"
    Sep  9 20:59:26.430: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-secrets-e0028da1-1d7e-488e-9828-5656b670227d container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  9 20:59:26.450: INFO: Waiting for pod pod-secrets-e0028da1-1d7e-488e-9828-5656b670227d to disappear
    Sep  9 20:59:26.453: INFO: Pod pod-secrets-e0028da1-1d7e-488e-9828-5656b670227d no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:59:26.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-7737" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":312,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:59:26.485: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep  9 20:59:26.532: INFO: Waiting up to 5m0s for pod "pod-5d31a54b-d4ed-4419-8619-645498b1564c" in namespace "emptydir-2283" to be "Succeeded or Failed"
    Sep  9 20:59:26.535: INFO: Pod "pod-5d31a54b-d4ed-4419-8619-645498b1564c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.612159ms
    Sep  9 20:59:28.540: INFO: Pod "pod-5d31a54b-d4ed-4419-8619-645498b1564c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00786616s
    STEP: Saw pod success
    Sep  9 20:59:28.540: INFO: Pod "pod-5d31a54b-d4ed-4419-8619-645498b1564c" satisfied condition "Succeeded or Failed"
    Sep  9 20:59:28.544: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-5d31a54b-d4ed-4419-8619-645498b1564c container test-container: <nil>
    STEP: delete the pod
    Sep  9 20:59:28.565: INFO: Waiting for pod pod-5d31a54b-d4ed-4419-8619-645498b1564c to disappear
    Sep  9 20:59:28.568: INFO: Pod pod-5d31a54b-d4ed-4419-8619-645498b1564c no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:59:28.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-2283" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":322,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:59:32.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3340" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":374,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:59:38.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-5992" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":23,"skipped":408,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 20:59:39.006: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  9 20:59:39.063: INFO: Waiting up to 5m0s for pod "downward-api-16373d11-57ea-4a93-9919-075ac8913b9c" in namespace "downward-api-1577" to be "Succeeded or Failed"
    Sep  9 20:59:39.067: INFO: Pod "downward-api-16373d11-57ea-4a93-9919-075ac8913b9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.879218ms
    Sep  9 20:59:41.072: INFO: Pod "downward-api-16373d11-57ea-4a93-9919-075ac8913b9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009049062s
    STEP: Saw pod success
    Sep  9 20:59:41.072: INFO: Pod "downward-api-16373d11-57ea-4a93-9919-075ac8913b9c" satisfied condition "Succeeded or Failed"
    Sep  9 20:59:41.076: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod downward-api-16373d11-57ea-4a93-9919-075ac8913b9c container dapi-container: <nil>
    STEP: delete the pod
    Sep  9 20:59:41.097: INFO: Waiting for pod downward-api-16373d11-57ea-4a93-9919-075ac8913b9c to disappear
    Sep  9 20:59:41.100: INFO: Pod downward-api-16373d11-57ea-4a93-9919-075ac8913b9c no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 20:59:41.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-1577" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":434,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:00:09.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-6540" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":25,"skipped":439,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:00:16.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-6619" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":26,"skipped":443,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Servers with support for Table transformation
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:00:16.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "tables-8396" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":27,"skipped":463,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 21:00:16.579: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dcc8002f-7861-41c4-8e54-c1f64365d7fd" in namespace "projected-7953" to be "Succeeded or Failed"
    Sep  9 21:00:16.589: INFO: Pod "downwardapi-volume-dcc8002f-7861-41c4-8e54-c1f64365d7fd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.175086ms
    Sep  9 21:00:18.593: INFO: Pod "downwardapi-volume-dcc8002f-7861-41c4-8e54-c1f64365d7fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014878393s
    STEP: Saw pod success
    Sep  9 21:00:18.594: INFO: Pod "downwardapi-volume-dcc8002f-7861-41c4-8e54-c1f64365d7fd" satisfied condition "Succeeded or Failed"
    Sep  9 21:00:18.598: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod downwardapi-volume-dcc8002f-7861-41c4-8e54-c1f64365d7fd container client-container: <nil>
    STEP: delete the pod
    Sep  9 21:00:18.630: INFO: Waiting for pod downwardapi-volume-dcc8002f-7861-41c4-8e54-c1f64365d7fd to disappear
    Sep  9 21:00:18.633: INFO: Pod downwardapi-volume-dcc8002f-7861-41c4-8e54-c1f64365d7fd no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:00:18.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7953" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":512,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:00:18.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-1427" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":29,"skipped":526,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:00:18.808: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-map-6b832acf-9aed-4c06-a83e-4f35f17acd64
    STEP: Creating a pod to test consume secrets
    Sep  9 21:00:18.861: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bfd131c8-9470-4291-bdbf-1063f6ff3715" in namespace "projected-8579" to be "Succeeded or Failed"
    Sep  9 21:00:18.865: INFO: Pod "pod-projected-secrets-bfd131c8-9470-4291-bdbf-1063f6ff3715": Phase="Pending", Reason="", readiness=false. Elapsed: 4.770939ms
    Sep  9 21:00:20.870: INFO: Pod "pod-projected-secrets-bfd131c8-9470-4291-bdbf-1063f6ff3715": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009690843s
    STEP: Saw pod success
    Sep  9 21:00:20.870: INFO: Pod "pod-projected-secrets-bfd131c8-9470-4291-bdbf-1063f6ff3715" satisfied condition "Succeeded or Failed"
    Sep  9 21:00:20.875: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-projected-secrets-bfd131c8-9470-4291-bdbf-1063f6ff3715 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  9 21:00:20.896: INFO: Waiting for pod pod-projected-secrets-bfd131c8-9470-4291-bdbf-1063f6ff3715 to disappear
    Sep  9 21:00:20.899: INFO: Pod pod-projected-secrets-bfd131c8-9470-4291-bdbf-1063f6ff3715 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:00:20.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-8579" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":551,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:01:01.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-9853" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":31,"skipped":560,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    S
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:01:05.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-1057" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":32,"skipped":561,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
    STEP: Creating replication controller my-hostname-basic-c9feaeb0-a476-4dc8-9368-fd95870380f9
    Sep  9 20:53:55.071: INFO: Pod name my-hostname-basic-c9feaeb0-a476-4dc8-9368-fd95870380f9: Found 0 pods out of 1
    Sep  9 20:54:00.081: INFO: Pod name my-hostname-basic-c9feaeb0-a476-4dc8-9368-fd95870380f9: Found 1 pods out of 1
    Sep  9 20:54:00.081: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c9feaeb0-a476-4dc8-9368-fd95870380f9" are running
    Sep  9 20:54:00.086: INFO: Pod "my-hostname-basic-c9feaeb0-a476-4dc8-9368-fd95870380f9-ckfbl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-09 20:53:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-09 20:53:56 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-09 20:53:56 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-09-09 20:53:55 +0000 UTC Reason: Message:}])
    Sep  9 20:54:00.087: INFO: Trying to dial the pod
    Sep  9 20:57:39.805: INFO: Controller my-hostname-basic-c9feaeb0-a476-4dc8-9368-fd95870380f9: Failed to GET from replica 1 [my-hostname-basic-c9feaeb0-a476-4dc8-9368-fd95870380f9-ckfbl]: the server is currently unable to handle the request (get pods my-hostname-basic-c9feaeb0-a476-4dc8-9368-fd95870380f9-ckfbl)
    pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798353635, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798353636, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798353636, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798353635, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"192.168.2.35", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.2.35"}}, StartTime:(*v1.Time)(0xc003dde198), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-c9feaeb0-a476-4dc8-9368-fd95870380f9", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc003dde1b0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1", ContainerID:"containerd://f1d0cc3468f519d287ec41f8ac8e82c65da13098631584039377c97372bf617f", Started:(*bool)(0xc003bc635a)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
    Sep  9 21:01:12.794: INFO: Controller my-hostname-basic-c9feaeb0-a476-4dc8-9368-fd95870380f9: Failed to GET from replica 1 [my-hostname-basic-c9feaeb0-a476-4dc8-9368-fd95870380f9-ckfbl]: the server is currently unable to handle the request (get pods my-hostname-basic-c9feaeb0-a476-4dc8-9368-fd95870380f9-ckfbl)
    pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798353635, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798353636, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798353636, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63798353635, loc:(*time.Location)(0x9e363e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"192.168.2.35", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.2.35"}}, StartTime:(*v1.Time)(0xc003dde198), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-c9feaeb0-a476-4dc8-9368-fd95870380f9", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc003dde1b0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1", ContainerID:"containerd://f1d0cc3468f519d287ec41f8ac8e82c65da13098631584039377c97372bf617f", Started:(*bool)(0xc003bc635a)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
    Sep  9 21:01:12.794: FAIL: Did not get expected responses within the timeout period of 120.00 seconds.

    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/apps.glob..func8.2()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65 +0x57
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002a9ad80)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 57 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:01:13.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "limitrange-4075" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":33,"skipped":599,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:01:13.032: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on node default medium
    Sep  9 21:01:13.074: INFO: Waiting up to 5m0s for pod "pod-64adef39-b17c-4944-9252-2c9f88d9f1ac" in namespace "emptydir-8669" to be "Succeeded or Failed"
    Sep  9 21:01:13.080: INFO: Pod "pod-64adef39-b17c-4944-9252-2c9f88d9f1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.724238ms
    Sep  9 21:01:15.084: INFO: Pod "pod-64adef39-b17c-4944-9252-2c9f88d9f1ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008928438s
    STEP: Saw pod success
    Sep  9 21:01:15.084: INFO: Pod "pod-64adef39-b17c-4944-9252-2c9f88d9f1ac" satisfied condition "Succeeded or Failed"
    Sep  9 21:01:15.089: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth pod pod-64adef39-b17c-4944-9252-2c9f88d9f1ac container test-container: <nil>
    STEP: delete the pod
    Sep  9 21:01:15.117: INFO: Waiting for pod pod-64adef39-b17c-4944-9252-2c9f88d9f1ac to disappear
    Sep  9 21:01:15.122: INFO: Pod pod-64adef39-b17c-4944-9252-2c9f88d9f1ac no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:01:15.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8669" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":602,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:01:15.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-1029" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":35,"skipped":611,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSS
    ------------------------------
    {"msg":"FAILED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":20,"skipped":306,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:01:12.809: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename replication-controller
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:01:22.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-4360" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":21,"skipped":306,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:01:23.027: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0666 on tmpfs
    Sep  9 21:01:23.094: INFO: Waiting up to 5m0s for pod "pod-0bc028ff-698d-40be-be58-593eaa681f64" in namespace "emptydir-2537" to be "Succeeded or Failed"
    Sep  9 21:01:23.105: INFO: Pod "pod-0bc028ff-698d-40be-be58-593eaa681f64": Phase="Pending", Reason="", readiness=false. Elapsed: 11.496242ms
    Sep  9 21:01:25.110: INFO: Pod "pod-0bc028ff-698d-40be-be58-593eaa681f64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016470171s
    STEP: Saw pod success
    Sep  9 21:01:25.110: INFO: Pod "pod-0bc028ff-698d-40be-be58-593eaa681f64" satisfied condition "Succeeded or Failed"
    Sep  9 21:01:25.114: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-0bc028ff-698d-40be-be58-593eaa681f64 container test-container: <nil>
    STEP: delete the pod
    Sep  9 21:01:25.131: INFO: Waiting for pod pod-0bc028ff-698d-40be-be58-593eaa681f64 to disappear
    Sep  9 21:01:25.140: INFO: Pod pod-0bc028ff-698d-40be-be58-593eaa681f64 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:01:25.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-2537" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":350,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Service endpoints latency
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 418 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:01:26.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svc-latency-7262" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":36,"skipped":616,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:01:26.068: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  9 21:01:26.108: INFO: Waiting up to 5m0s for pod "downward-api-18a7eaf2-30df-46db-a14a-fe2739dae3ac" in namespace "downward-api-7019" to be "Succeeded or Failed"
    Sep  9 21:01:26.111: INFO: Pod "downward-api-18a7eaf2-30df-46db-a14a-fe2739dae3ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.527521ms
    Sep  9 21:01:28.120: INFO: Pod "downward-api-18a7eaf2-30df-46db-a14a-fe2739dae3ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012015846s
    Sep  9 21:01:30.125: INFO: Pod "downward-api-18a7eaf2-30df-46db-a14a-fe2739dae3ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017471451s
    STEP: Saw pod success
    Sep  9 21:01:30.125: INFO: Pod "downward-api-18a7eaf2-30df-46db-a14a-fe2739dae3ac" satisfied condition "Succeeded or Failed"
    Sep  9 21:01:30.129: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod downward-api-18a7eaf2-30df-46db-a14a-fe2739dae3ac container dapi-container: <nil>
    STEP: delete the pod
    Sep  9 21:01:30.146: INFO: Waiting for pod downward-api-18a7eaf2-30df-46db-a14a-fe2739dae3ac to disappear
    Sep  9 21:01:30.150: INFO: Pod downward-api-18a7eaf2-30df-46db-a14a-fe2739dae3ac no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:01:30.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7019" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":625,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PreStop
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:01:34.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "prestop-9449" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":23,"skipped":356,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:01:34.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-7041" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":24,"skipped":360,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:01:34.590: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap configmap-9647/configmap-test-7e4e41fd-d876-405c-a06a-09ab4a9802ae
    STEP: Creating a pod to test consume configMaps
    Sep  9 21:01:34.678: INFO: Waiting up to 5m0s for pod "pod-configmaps-f39f646f-d540-4e6f-8be0-a033c18d6bc2" in namespace "configmap-9647" to be "Succeeded or Failed"
    Sep  9 21:01:34.683: INFO: Pod "pod-configmaps-f39f646f-d540-4e6f-8be0-a033c18d6bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.360189ms
    Sep  9 21:01:36.696: INFO: Pod "pod-configmaps-f39f646f-d540-4e6f-8be0-a033c18d6bc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017574496s
    STEP: Saw pod success
    Sep  9 21:01:36.696: INFO: Pod "pod-configmaps-f39f646f-d540-4e6f-8be0-a033c18d6bc2" satisfied condition "Succeeded or Failed"
    Sep  9 21:01:36.709: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-configmaps-f39f646f-d540-4e6f-8be0-a033c18d6bc2 container env-test: <nil>
    STEP: delete the pod
    Sep  9 21:01:36.746: INFO: Waiting for pod pod-configmaps-f39f646f-d540-4e6f-8be0-a033c18d6bc2 to disappear
    Sep  9 21:01:36.757: INFO: Pod pod-configmaps-f39f646f-d540-4e6f-8be0-a033c18d6bc2 no longer exists
    [AfterEach] [sig-node] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:01:36.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-9647" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":363,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
    
    STEP: creating a pod to probe /etc/hosts
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  9 21:00:13.401: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-9330.svc.cluster.local from pod dns-9330/dns-test-39b28638-d3da-45ab-9ca8-1278a01a2109: the server is currently unable to handle the request (get pods dns-test-39b28638-d3da-45ab-9ca8-1278a01a2109)
    Sep  9 21:01:39.451: FAIL: Unable to read wheezy_hosts@dns-querier-1 from pod dns-9330/dns-test-39b28638-d3da-45ab-9ca8-1278a01a2109: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-9330/pods/dns-test-39b28638-d3da-45ab-9ca8-1278a01a2109/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded

    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0031fdd68, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000c47c68, 0xc0031fdd68, 0xc000c47c68, 0xc0031fdd68)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc003bd5980, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0909 21:01:39.454037      16 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  9 21:01:39.452: Unable to read wheezy_hosts@dns-querier-1 from pod dns-9330/dns-test-39b28638-d3da-45ab-9ca8-1278a01a2109: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-9330/pods/dns-test-39b28638-d3da-45ab-9ca8-1278a01a2109/proxy/results/wheezy_hosts@dns-querier-1\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0031fdd68, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000c47c68, 0xc0031fdd68, 0xc000c47c68, 0xc0031fdd68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0031fdd68, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc001491880, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc003a6c400, 0x77b8c18, 0xc0036e7340, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000f19080, 0xc003a6c400, 0xc001491880, 0x8, 
0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.4()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:127 +0x62a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc003bd5980)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc003bd5980)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc003bd5980, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc0036580c0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc0036580c0)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc002ec6140, 0x12f, 0x86a5e60, 0x7d, 0xd3, 0xc001656800, 0x7fc)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc002ec6140, 0x12f, 0xc0031fd7a8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc002ec6140, 0x12f, 0xc0031fd890, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc0031fdaf0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc000c47c00, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0031fdd68, 0x29a3500, 0x0, 0x0)
... skipping 93 lines ...
    STEP: Destroying namespace "webhook-8382-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":38,"skipped":641,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:01:42.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-7908" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":39,"skipped":659,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 65 lines ...
    STEP: Destroying namespace "services-9319" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":399,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:05.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-1281" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":27,"skipped":434,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 21:02:05.998: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51f7ed49-8ba7-4ac8-8591-d75ba23bc4a8" in namespace "downward-api-826" to be "Succeeded or Failed"
    Sep  9 21:02:06.001: INFO: Pod "downwardapi-volume-51f7ed49-8ba7-4ac8-8591-d75ba23bc4a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.554927ms
    Sep  9 21:02:08.006: INFO: Pod "downwardapi-volume-51f7ed49-8ba7-4ac8-8591-d75ba23bc4a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008822005s
    STEP: Saw pod success
    Sep  9 21:02:08.006: INFO: Pod "downwardapi-volume-51f7ed49-8ba7-4ac8-8591-d75ba23bc4a8" satisfied condition "Succeeded or Failed"
    Sep  9 21:02:08.010: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod downwardapi-volume-51f7ed49-8ba7-4ac8-8591-d75ba23bc4a8 container client-container: <nil>
    STEP: delete the pod
    Sep  9 21:02:08.032: INFO: Waiting for pod downwardapi-volume-51f7ed49-8ba7-4ac8-8591-d75ba23bc4a8 to disappear
    Sep  9 21:02:08.035: INFO: Pod downwardapi-volume-51f7ed49-8ba7-4ac8-8591-d75ba23bc4a8 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:08.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-826" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":457,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:02:08.120: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep  9 21:02:08.183: INFO: Waiting up to 5m0s for pod "pod-6ce9b707-650c-4923-8002-373a34fff15a" in namespace "emptydir-1481" to be "Succeeded or Failed"
    Sep  9 21:02:08.188: INFO: Pod "pod-6ce9b707-650c-4923-8002-373a34fff15a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.770227ms
    Sep  9 21:02:10.192: INFO: Pod "pod-6ce9b707-650c-4923-8002-373a34fff15a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009024922s
    STEP: Saw pod success
    Sep  9 21:02:10.192: INFO: Pod "pod-6ce9b707-650c-4923-8002-373a34fff15a" satisfied condition "Succeeded or Failed"
    Sep  9 21:02:10.195: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-6ce9b707-650c-4923-8002-373a34fff15a container test-container: <nil>
    STEP: delete the pod
    Sep  9 21:02:10.217: INFO: Waiting for pod pod-6ce9b707-650c-4923-8002-373a34fff15a to disappear
    Sep  9 21:02:10.220: INFO: Pod pod-6ce9b707-650c-4923-8002-373a34fff15a no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:10.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-1481" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":485,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:17.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-6080" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":30,"skipped":506,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:17.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-1361" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":31,"skipped":518,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:02:18.065: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-d9312fe8-9605-4eb1-92a1-3581f58eea69
    STEP: Creating a pod to test consume secrets
    Sep  9 21:02:18.109: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-18171457-564f-4d7f-a50d-c4dd2cb9ce87" in namespace "projected-1706" to be "Succeeded or Failed"
    Sep  9 21:02:18.112: INFO: Pod "pod-projected-secrets-18171457-564f-4d7f-a50d-c4dd2cb9ce87": Phase="Pending", Reason="", readiness=false. Elapsed: 3.002835ms
    Sep  9 21:02:20.117: INFO: Pod "pod-projected-secrets-18171457-564f-4d7f-a50d-c4dd2cb9ce87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008306951s
    STEP: Saw pod success
    Sep  9 21:02:20.117: INFO: Pod "pod-projected-secrets-18171457-564f-4d7f-a50d-c4dd2cb9ce87" satisfied condition "Succeeded or Failed"
    Sep  9 21:02:20.121: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-projected-secrets-18171457-564f-4d7f-a50d-c4dd2cb9ce87 container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  9 21:02:20.138: INFO: Waiting for pod pod-projected-secrets-18171457-564f-4d7f-a50d-c4dd2cb9ce87 to disappear
    Sep  9 21:02:20.141: INFO: Pod pod-projected-secrets-18171457-564f-4d7f-a50d-c4dd2cb9ce87 no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:20.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1706" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":558,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
    • [SLOW TEST:242.783 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":1039,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:22.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-504" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":58,"skipped":1064,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:02:22.848: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename containers
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test override command
    Sep  9 21:02:22.908: INFO: Waiting up to 5m0s for pod "client-containers-a362cbfb-7619-487f-87f8-8f09d1d1a3df" in namespace "containers-5150" to be "Succeeded or Failed"
    Sep  9 21:02:22.920: INFO: Pod "client-containers-a362cbfb-7619-487f-87f8-8f09d1d1a3df": Phase="Pending", Reason="", readiness=false. Elapsed: 11.184373ms
    Sep  9 21:02:24.924: INFO: Pod "client-containers-a362cbfb-7619-487f-87f8-8f09d1d1a3df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016090331s
    STEP: Saw pod success
    Sep  9 21:02:24.925: INFO: Pod "client-containers-a362cbfb-7619-487f-87f8-8f09d1d1a3df" satisfied condition "Succeeded or Failed"
    Sep  9 21:02:24.928: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod client-containers-a362cbfb-7619-487f-87f8-8f09d1d1a3df container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 21:02:24.959: INFO: Waiting for pod client-containers-a362cbfb-7619-487f-87f8-8f09d1d1a3df to disappear
    Sep  9 21:02:24.962: INFO: Pod client-containers-a362cbfb-7619-487f-87f8-8f09d1d1a3df no longer exists
    [AfterEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:24.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-5150" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1069,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSliceMirroring
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:31.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslicemirroring-3529" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":60,"skipped":1079,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
    [It] should serve multiport endpoints from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service multi-endpoint-test in namespace services-2552
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2552 to expose endpoints map[]
    Sep  9 21:02:31.213: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found
    Sep  9 21:02:32.223: INFO: successfully validated that service multi-endpoint-test in namespace services-2552 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-2552
    Sep  9 21:02:32.235: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep  9 21:02:34.242: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2552 to expose endpoints map[pod1:[100]]
    Sep  9 21:02:34.260: INFO: successfully validated that service multi-endpoint-test in namespace services-2552 exposes endpoints map[pod1:[100]]
... skipping 14 lines ...
    STEP: Destroying namespace "services-2552" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":61,"skipped":1099,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:38.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-1042" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":603,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:02:36.433: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep  9 21:02:36.493: INFO: Waiting up to 5m0s for pod "security-context-faec57fb-183d-4c19-9ad8-2bc23b4d2f57" in namespace "security-context-2515" to be "Succeeded or Failed"
    Sep  9 21:02:36.507: INFO: Pod "security-context-faec57fb-183d-4c19-9ad8-2bc23b4d2f57": Phase="Pending", Reason="", readiness=false. Elapsed: 13.231853ms
    Sep  9 21:02:38.513: INFO: Pod "security-context-faec57fb-183d-4c19-9ad8-2bc23b4d2f57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019979077s
    STEP: Saw pod success
    Sep  9 21:02:38.513: INFO: Pod "security-context-faec57fb-183d-4c19-9ad8-2bc23b4d2f57" satisfied condition "Succeeded or Failed"
    Sep  9 21:02:38.518: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod security-context-faec57fb-183d-4c19-9ad8-2bc23b4d2f57 container test-container: <nil>
    STEP: delete the pod
    Sep  9 21:02:38.540: INFO: Waiting for pod security-context-faec57fb-183d-4c19-9ad8-2bc23b4d2f57 to disappear
    Sep  9 21:02:38.545: INFO: Pod security-context-faec57fb-183d-4c19-9ad8-2bc23b4d2f57 no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:38.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-2515" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":62,"skipped":1100,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:02:38.558: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
    Sep  9 21:02:38.607: INFO: Waiting up to 5m0s for pod "security-context-eb51de2b-3858-4b46-a971-32c1e41ad186" in namespace "security-context-6762" to be "Succeeded or Failed"
    Sep  9 21:02:38.611: INFO: Pod "security-context-eb51de2b-3858-4b46-a971-32c1e41ad186": Phase="Pending", Reason="", readiness=false. Elapsed: 3.748236ms
    Sep  9 21:02:40.616: INFO: Pod "security-context-eb51de2b-3858-4b46-a971-32c1e41ad186": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008932213s
    STEP: Saw pod success
    Sep  9 21:02:40.616: INFO: Pod "security-context-eb51de2b-3858-4b46-a971-32c1e41ad186" satisfied condition "Succeeded or Failed"
    Sep  9 21:02:40.620: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod security-context-eb51de2b-3858-4b46-a971-32c1e41ad186 container test-container: <nil>
    STEP: delete the pod
    Sep  9 21:02:40.640: INFO: Waiting for pod security-context-eb51de2b-3858-4b46-a971-32c1e41ad186 to disappear
    Sep  9 21:02:40.647: INFO: Pod security-context-eb51de2b-3858-4b46-a971-32c1e41ad186 no longer exists
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:40.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-6762" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":63,"skipped":1100,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    Sep  9 21:02:42.795: INFO: The status of Pod pod-update-activedeadlineseconds-1088dee0-08ff-4b3d-bc3a-c1c29e65f546 is Running (Ready = true)
    STEP: verifying the pod is in kubernetes
    STEP: updating the pod
    Sep  9 21:02:43.316: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1088dee0-08ff-4b3d-bc3a-c1c29e65f546"
    Sep  9 21:02:43.316: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1088dee0-08ff-4b3d-bc3a-c1c29e65f546" in namespace "pods-128" to be "terminated due to deadline exceeded"
    Sep  9 21:02:43.321: INFO: Pod "pod-update-activedeadlineseconds-1088dee0-08ff-4b3d-bc3a-c1c29e65f546": Phase="Running", Reason="", readiness=true. Elapsed: 5.015507ms
    Sep  9 21:02:45.327: INFO: Pod "pod-update-activedeadlineseconds-1088dee0-08ff-4b3d-bc3a-c1c29e65f546": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.010960665s
    Sep  9 21:02:45.327: INFO: Pod "pod-update-activedeadlineseconds-1088dee0-08ff-4b3d-bc3a-c1c29e65f546" satisfied condition "terminated due to deadline exceeded"
    [AfterEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:45.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-128" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1132,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] HostPort
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:53.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "hostport-3358" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":34,"skipped":612,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:55.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-2546" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":659,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:02:56.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7159" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":36,"skipped":685,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:02:56.210: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename job
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a job
    STEP: Ensuring job reaches completions
    [AfterEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:03:02.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-5715" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":37,"skipped":692,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 51 lines ...
    STEP: Destroying namespace "services-5544" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":38,"skipped":716,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:03:23.289: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  9 21:03:23.350: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-72eb801a-a9ec-4b1b-9f66-28040380d0b3" in namespace "security-context-test-8635" to be "Succeeded or Failed"
    Sep  9 21:03:23.355: INFO: Pod "alpine-nnp-false-72eb801a-a9ec-4b1b-9f66-28040380d0b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.830773ms
    Sep  9 21:03:25.359: INFO: Pod "alpine-nnp-false-72eb801a-a9ec-4b1b-9f66-28040380d0b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009458459s
    Sep  9 21:03:25.359: INFO: Pod "alpine-nnp-false-72eb801a-a9ec-4b1b-9f66-28040380d0b3" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:03:25.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-8635" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":743,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:03:27.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-1797" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":773,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 336 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:03:33.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-2866" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":41,"skipped":778,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:03:33.460: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:03:34.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-1990" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":42,"skipped":778,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:03:34.577: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-ac2426b2-dd88-44d9-b624-443f3f0b40fc
    STEP: Creating a pod to test consume configMaps
    Sep  9 21:03:34.627: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a1a1421a-569b-4af2-9b5d-13739ca1fd53" in namespace "projected-2956" to be "Succeeded or Failed"
    Sep  9 21:03:34.630: INFO: Pod "pod-projected-configmaps-a1a1421a-569b-4af2-9b5d-13739ca1fd53": Phase="Pending", Reason="", readiness=false. Elapsed: 3.272923ms
    Sep  9 21:03:36.635: INFO: Pod "pod-projected-configmaps-a1a1421a-569b-4af2-9b5d-13739ca1fd53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008082778s
    STEP: Saw pod success
    Sep  9 21:03:36.635: INFO: Pod "pod-projected-configmaps-a1a1421a-569b-4af2-9b5d-13739ca1fd53" satisfied condition "Succeeded or Failed"
    Sep  9 21:03:36.639: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-projected-configmaps-a1a1421a-569b-4af2-9b5d-13739ca1fd53 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 21:03:36.662: INFO: Waiting for pod pod-projected-configmaps-a1a1421a-569b-4af2-9b5d-13739ca1fd53 to disappear
    Sep  9 21:03:36.666: INFO: Pod pod-projected-configmaps-a1a1421a-569b-4af2-9b5d-13739ca1fd53 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:03:36.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-2956" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":803,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
    
    SS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    Sep  9 21:03:38.879: INFO: Unable to read jessie_udp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:38.884: INFO: Unable to read jessie_tcp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:38.890: INFO: Unable to read jessie_udp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:38.898: INFO: Unable to read jessie_tcp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:38.904: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:38.908: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:38.948: INFO: Lookups using dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7478 wheezy_tcp@dns-test-service.dns-7478 wheezy_udp@dns-test-service.dns-7478.svc wheezy_tcp@dns-test-service.dns-7478.svc wheezy_udp@_http._tcp.dns-test-service.dns-7478.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7478.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7478 jessie_tcp@dns-test-service.dns-7478 jessie_udp@dns-test-service.dns-7478.svc jessie_tcp@dns-test-service.dns-7478.svc jessie_udp@_http._tcp.dns-test-service.dns-7478.svc jessie_tcp@_http._tcp.dns-test-service.dns-7478.svc]

    
    Sep  9 21:03:43.955: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:43.959: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:43.963: INFO: Unable to read wheezy_udp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:43.967: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:43.971: INFO: Unable to read wheezy_udp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
... skipping 5 lines ...
    Sep  9 21:03:44.026: INFO: Unable to read jessie_udp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:44.030: INFO: Unable to read jessie_tcp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:44.035: INFO: Unable to read jessie_udp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:44.038: INFO: Unable to read jessie_tcp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:44.043: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:44.046: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:44.070: INFO: Lookups using dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7478 wheezy_tcp@dns-test-service.dns-7478 wheezy_udp@dns-test-service.dns-7478.svc wheezy_tcp@dns-test-service.dns-7478.svc wheezy_udp@_http._tcp.dns-test-service.dns-7478.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7478.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7478 jessie_tcp@dns-test-service.dns-7478 jessie_udp@dns-test-service.dns-7478.svc jessie_tcp@dns-test-service.dns-7478.svc jessie_udp@_http._tcp.dns-test-service.dns-7478.svc jessie_tcp@_http._tcp.dns-test-service.dns-7478.svc]

    
    Sep  9 21:03:48.954: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:48.959: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:48.964: INFO: Unable to read wheezy_udp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:48.968: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:48.973: INFO: Unable to read wheezy_udp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
... skipping 5 lines ...
    Sep  9 21:03:49.038: INFO: Unable to read jessie_udp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:49.042: INFO: Unable to read jessie_tcp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:49.047: INFO: Unable to read jessie_udp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:49.050: INFO: Unable to read jessie_tcp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:49.058: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:49.066: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:49.106: INFO: Lookups using dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7478 wheezy_tcp@dns-test-service.dns-7478 wheezy_udp@dns-test-service.dns-7478.svc wheezy_tcp@dns-test-service.dns-7478.svc wheezy_udp@_http._tcp.dns-test-service.dns-7478.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7478.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7478 jessie_tcp@dns-test-service.dns-7478 jessie_udp@dns-test-service.dns-7478.svc jessie_tcp@dns-test-service.dns-7478.svc jessie_udp@_http._tcp.dns-test-service.dns-7478.svc jessie_tcp@_http._tcp.dns-test-service.dns-7478.svc]

    
    Sep  9 21:03:53.955: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:53.962: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:53.965: INFO: Unable to read wheezy_udp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:53.969: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:53.973: INFO: Unable to read wheezy_udp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
... skipping 5 lines ...
    Sep  9 21:03:54.025: INFO: Unable to read jessie_udp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:54.029: INFO: Unable to read jessie_tcp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:54.035: INFO: Unable to read jessie_udp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:54.041: INFO: Unable to read jessie_tcp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:54.045: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:54.050: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:54.076: INFO: Lookups using dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7478 wheezy_tcp@dns-test-service.dns-7478 wheezy_udp@dns-test-service.dns-7478.svc wheezy_tcp@dns-test-service.dns-7478.svc wheezy_udp@_http._tcp.dns-test-service.dns-7478.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7478.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7478 jessie_tcp@dns-test-service.dns-7478 jessie_udp@dns-test-service.dns-7478.svc jessie_tcp@dns-test-service.dns-7478.svc jessie_udp@_http._tcp.dns-test-service.dns-7478.svc jessie_tcp@_http._tcp.dns-test-service.dns-7478.svc]

    
    Sep  9 21:03:58.955: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:58.961: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:58.968: INFO: Unable to read wheezy_udp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:58.973: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:58.978: INFO: Unable to read wheezy_udp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
... skipping 5 lines ...
    Sep  9 21:03:59.040: INFO: Unable to read jessie_udp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:59.045: INFO: Unable to read jessie_tcp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:59.049: INFO: Unable to read jessie_udp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:59.056: INFO: Unable to read jessie_tcp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:59.062: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:59.068: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:03:59.105: INFO: Lookups using dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7478 wheezy_tcp@dns-test-service.dns-7478 wheezy_udp@dns-test-service.dns-7478.svc wheezy_tcp@dns-test-service.dns-7478.svc wheezy_udp@_http._tcp.dns-test-service.dns-7478.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7478.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7478 jessie_tcp@dns-test-service.dns-7478 jessie_udp@dns-test-service.dns-7478.svc jessie_tcp@dns-test-service.dns-7478.svc jessie_udp@_http._tcp.dns-test-service.dns-7478.svc jessie_tcp@_http._tcp.dns-test-service.dns-7478.svc]

    
    Sep  9 21:04:03.954: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:04:03.959: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:04:03.965: INFO: Unable to read wheezy_udp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:04:03.969: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:04:03.974: INFO: Unable to read wheezy_udp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
... skipping 5 lines ...
    Sep  9 21:04:04.036: INFO: Unable to read jessie_udp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:04:04.041: INFO: Unable to read jessie_tcp@dns-test-service.dns-7478 from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:04:04.045: INFO: Unable to read jessie_udp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:04:04.050: INFO: Unable to read jessie_tcp@dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:04:04.054: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:04:04.059: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:04:04.087: INFO: Lookups using dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7478 wheezy_tcp@dns-test-service.dns-7478 wheezy_udp@dns-test-service.dns-7478.svc wheezy_tcp@dns-test-service.dns-7478.svc wheezy_udp@_http._tcp.dns-test-service.dns-7478.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7478.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7478 jessie_tcp@dns-test-service.dns-7478 jessie_udp@dns-test-service.dns-7478.svc jessie_tcp@dns-test-service.dns-7478.svc jessie_udp@_http._tcp.dns-test-service.dns-7478.svc jessie_tcp@_http._tcp.dns-test-service.dns-7478.svc]

    
    Sep  9 21:04:08.988: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:04:08.994: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7478.svc from pod dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f: the server could not find the requested resource (get pods dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f)
    Sep  9 21:04:09.108: INFO: Lookups using dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7478.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7478.svc]

    
    Sep  9 21:04:14.068: INFO: DNS probes using dns-7478/dns-test-75e1fe23-2427-4163-b72f-a5cd3cfea08f succeeded
    
    STEP: deleting the pod
    STEP: deleting the test service
    STEP: deleting the test headless service
    [AfterEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:14.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-7478" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":44,"skipped":805,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Watchers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:14.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "watch-2806" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":45,"skipped":829,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 21:04:14.442: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d199f555-209f-4520-9ba5-df7cfb4c6a63" in namespace "downward-api-1404" to be "Succeeded or Failed"
    Sep  9 21:04:14.447: INFO: Pod "downwardapi-volume-d199f555-209f-4520-9ba5-df7cfb4c6a63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.672704ms
    Sep  9 21:04:16.452: INFO: Pod "downwardapi-volume-d199f555-209f-4520-9ba5-df7cfb4c6a63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009756164s
    STEP: Saw pod success
    Sep  9 21:04:16.452: INFO: Pod "downwardapi-volume-d199f555-209f-4520-9ba5-df7cfb4c6a63" satisfied condition "Succeeded or Failed"
    Sep  9 21:04:16.456: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod downwardapi-volume-d199f555-209f-4520-9ba5-df7cfb4c6a63 container client-container: <nil>
    STEP: delete the pod
    Sep  9 21:04:16.474: INFO: Waiting for pod downwardapi-volume-d199f555-209f-4520-9ba5-df7cfb4c6a63 to disappear
    Sep  9 21:04:16.479: INFO: Pod downwardapi-volume-d199f555-209f-4520-9ba5-df7cfb4c6a63 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:16.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-1404" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":842,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:01:43.009: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating the pod with failed condition
    STEP: updating the pod
    Sep  9 21:03:43.588: INFO: Successfully updated pod "var-expansion-4698a30d-7703-4549-b96c-6970b2be9d3a"
    STEP: waiting for pod running
    STEP: deleting the pod gracefully
    Sep  9 21:03:45.598: INFO: Deleting pod "var-expansion-4698a30d-7703-4549-b96c-6970b2be9d3a" in namespace "var-expansion-4565"
    Sep  9 21:03:45.606: INFO: Wait up to 5m0s for pod "var-expansion-4698a30d-7703-4549-b96c-6970b2be9d3a" to be fully deleted
... skipping 6 lines ...
    • [SLOW TEST:154.620 seconds]
    [sig-node] Variable Expansion
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":40,"skipped":661,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:26.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-8833" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":47,"skipped":852,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "webhook-4012-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":48,"skipped":885,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:04:30.486: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on tmpfs
    Sep  9 21:04:30.552: INFO: Waiting up to 5m0s for pod "pod-5c805813-9c77-4877-81f6-bcfb0ae7ba1a" in namespace "emptydir-4947" to be "Succeeded or Failed"
    Sep  9 21:04:30.557: INFO: Pod "pod-5c805813-9c77-4877-81f6-bcfb0ae7ba1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.82782ms
    Sep  9 21:04:32.563: INFO: Pod "pod-5c805813-9c77-4877-81f6-bcfb0ae7ba1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010616308s
    STEP: Saw pod success
    Sep  9 21:04:32.563: INFO: Pod "pod-5c805813-9c77-4877-81f6-bcfb0ae7ba1a" satisfied condition "Succeeded or Failed"
    Sep  9 21:04:32.566: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-5c805813-9c77-4877-81f6-bcfb0ae7ba1a container test-container: <nil>
    STEP: delete the pod
    Sep  9 21:04:32.588: INFO: Waiting for pod pod-5c805813-9c77-4877-81f6-bcfb0ae7ba1a to disappear
    Sep  9 21:04:32.590: INFO: Pod pod-5c805813-9c77-4877-81f6-bcfb0ae7ba1a no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:32.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4947" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":888,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:33.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-6419" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":41,"skipped":698,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:04:32.623: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-map-39fd520e-089d-42f6-a764-0205e663d317
    STEP: Creating a pod to test consume configMaps
    Sep  9 21:04:32.677: INFO: Waiting up to 5m0s for pod "pod-configmaps-8d9cc608-3e8d-401e-94f2-f27d2a49e427" in namespace "configmap-4620" to be "Succeeded or Failed"
    Sep  9 21:04:32.682: INFO: Pod "pod-configmaps-8d9cc608-3e8d-401e-94f2-f27d2a49e427": Phase="Pending", Reason="", readiness=false. Elapsed: 4.602568ms
    Sep  9 21:04:34.687: INFO: Pod "pod-configmaps-8d9cc608-3e8d-401e-94f2-f27d2a49e427": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010051378s
    STEP: Saw pod success
    Sep  9 21:04:34.687: INFO: Pod "pod-configmaps-8d9cc608-3e8d-401e-94f2-f27d2a49e427" satisfied condition "Succeeded or Failed"
    Sep  9 21:04:34.692: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-configmaps-8d9cc608-3e8d-401e-94f2-f27d2a49e427 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 21:04:34.713: INFO: Waiting for pod pod-configmaps-8d9cc608-3e8d-401e-94f2-f27d2a49e427 to disappear
    Sep  9 21:04:34.717: INFO: Pod pod-configmaps-8d9cc608-3e8d-401e-94f2-f27d2a49e427 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:34.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4620" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":897,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:04:34.730: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename custom-resource-definition
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:35.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-7530" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":51,"skipped":897,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:04:35.845: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating secret with name secret-test-81260ab9-ff59-4719-a7f1-fe42a98acf69
    STEP: Creating a pod to test consume secrets
    Sep  9 21:04:35.930: INFO: Waiting up to 5m0s for pod "pod-secrets-4f4ca0c8-c46d-450c-8cc3-f73e9120a3e8" in namespace "secrets-8194" to be "Succeeded or Failed"
    Sep  9 21:04:35.933: INFO: Pod "pod-secrets-4f4ca0c8-c46d-450c-8cc3-f73e9120a3e8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.380208ms
    Sep  9 21:04:37.937: INFO: Pod "pod-secrets-4f4ca0c8-c46d-450c-8cc3-f73e9120a3e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007908511s
    STEP: Saw pod success
    Sep  9 21:04:37.938: INFO: Pod "pod-secrets-4f4ca0c8-c46d-450c-8cc3-f73e9120a3e8" satisfied condition "Succeeded or Failed"
    Sep  9 21:04:37.941: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-secrets-4f4ca0c8-c46d-450c-8cc3-f73e9120a3e8 container secret-volume-test: <nil>
    STEP: delete the pod
    Sep  9 21:04:37.968: INFO: Waiting for pod pod-secrets-4f4ca0c8-c46d-450c-8cc3-f73e9120a3e8 to disappear
    Sep  9 21:04:37.971: INFO: Pod pod-secrets-4f4ca0c8-c46d-450c-8cc3-f73e9120a3e8 no longer exists
    [AfterEach] [sig-storage] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:37.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-8194" for this suite.
    STEP: Destroying namespace "secret-namespace-1753" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":919,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 21:04:38.041: INFO: Waiting up to 5m0s for pod "downwardapi-volume-edf7ad2f-5490-4d52-b1da-f63605a616e3" in namespace "downward-api-173" to be "Succeeded or Failed"
    Sep  9 21:04:38.045: INFO: Pod "downwardapi-volume-edf7ad2f-5490-4d52-b1da-f63605a616e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.898837ms
    Sep  9 21:04:40.053: INFO: Pod "downwardapi-volume-edf7ad2f-5490-4d52-b1da-f63605a616e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012021574s
    STEP: Saw pod success
    Sep  9 21:04:40.053: INFO: Pod "downwardapi-volume-edf7ad2f-5490-4d52-b1da-f63605a616e3" satisfied condition "Succeeded or Failed"
    Sep  9 21:04:40.059: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth pod downwardapi-volume-edf7ad2f-5490-4d52-b1da-f63605a616e3 container client-container: <nil>
    STEP: delete the pod
    Sep  9 21:04:40.101: INFO: Waiting for pod downwardapi-volume-edf7ad2f-5490-4d52-b1da-f63605a616e3 to disappear
    Sep  9 21:04:40.105: INFO: Pod downwardapi-volume-edf7ad2f-5490-4d52-b1da-f63605a616e3 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:40.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-173" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":921,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:04:40.125: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename resourcequota
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:40.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-4130" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":54,"skipped":921,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:04:40.298: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a volume subpath [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in volume subpath
    Sep  9 21:04:40.340: INFO: Waiting up to 5m0s for pod "var-expansion-0efd22ab-29f2-4d9c-ba31-e8ab3fd59d68" in namespace "var-expansion-8515" to be "Succeeded or Failed"
    Sep  9 21:04:40.345: INFO: Pod "var-expansion-0efd22ab-29f2-4d9c-ba31-e8ab3fd59d68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.591976ms
    Sep  9 21:04:42.350: INFO: Pod "var-expansion-0efd22ab-29f2-4d9c-ba31-e8ab3fd59d68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00980945s
    STEP: Saw pod success
    Sep  9 21:04:42.350: INFO: Pod "var-expansion-0efd22ab-29f2-4d9c-ba31-e8ab3fd59d68" satisfied condition "Succeeded or Failed"
    Sep  9 21:04:42.355: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod var-expansion-0efd22ab-29f2-4d9c-ba31-e8ab3fd59d68 container dapi-container: <nil>
    STEP: delete the pod
    Sep  9 21:04:42.379: INFO: Waiting for pod var-expansion-0efd22ab-29f2-4d9c-ba31-e8ab3fd59d68 to disappear
    Sep  9 21:04:42.382: INFO: Pod var-expansion-0efd22ab-29f2-4d9c-ba31-e8ab3fd59d68 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:42.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-8515" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":55,"skipped":966,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with projected pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-projected-qvp4
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  9 21:04:34.002: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qvp4" in namespace "subpath-3411" to be "Succeeded or Failed"
    Sep  9 21:04:34.008: INFO: Pod "pod-subpath-test-projected-qvp4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.189467ms
    Sep  9 21:04:36.014: INFO: Pod "pod-subpath-test-projected-qvp4": Phase="Running", Reason="", readiness=true. Elapsed: 2.011550632s
    Sep  9 21:04:38.017: INFO: Pod "pod-subpath-test-projected-qvp4": Phase="Running", Reason="", readiness=true. Elapsed: 4.014868641s
    Sep  9 21:04:40.023: INFO: Pod "pod-subpath-test-projected-qvp4": Phase="Running", Reason="", readiness=true. Elapsed: 6.020228062s
    Sep  9 21:04:42.028: INFO: Pod "pod-subpath-test-projected-qvp4": Phase="Running", Reason="", readiness=true. Elapsed: 8.025295028s
    Sep  9 21:04:44.033: INFO: Pod "pod-subpath-test-projected-qvp4": Phase="Running", Reason="", readiness=true. Elapsed: 10.030681929s
    Sep  9 21:04:46.039: INFO: Pod "pod-subpath-test-projected-qvp4": Phase="Running", Reason="", readiness=true. Elapsed: 12.036760544s
    Sep  9 21:04:48.046: INFO: Pod "pod-subpath-test-projected-qvp4": Phase="Running", Reason="", readiness=true. Elapsed: 14.043547426s
    Sep  9 21:04:50.052: INFO: Pod "pod-subpath-test-projected-qvp4": Phase="Running", Reason="", readiness=true. Elapsed: 16.04970259s
    Sep  9 21:04:52.057: INFO: Pod "pod-subpath-test-projected-qvp4": Phase="Running", Reason="", readiness=true. Elapsed: 18.054186099s
    Sep  9 21:04:54.062: INFO: Pod "pod-subpath-test-projected-qvp4": Phase="Running", Reason="", readiness=true. Elapsed: 20.059598088s
    Sep  9 21:04:56.066: INFO: Pod "pod-subpath-test-projected-qvp4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.063865601s
    STEP: Saw pod success
    Sep  9 21:04:56.066: INFO: Pod "pod-subpath-test-projected-qvp4" satisfied condition "Succeeded or Failed"
    Sep  9 21:04:56.070: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-subpath-test-projected-qvp4 container test-container-subpath-projected-qvp4: <nil>
    STEP: delete the pod
    Sep  9 21:04:56.087: INFO: Waiting for pod pod-subpath-test-projected-qvp4 to disappear
    Sep  9 21:04:56.090: INFO: Pod pod-subpath-test-projected-qvp4 no longer exists
    STEP: Deleting pod pod-subpath-test-projected-qvp4
    Sep  9 21:04:56.090: INFO: Deleting pod "pod-subpath-test-projected-qvp4" in namespace "subpath-3411"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:56.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-3411" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":42,"skipped":727,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:04:59.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-9000" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":43,"skipped":740,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 21:04:59.367: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cdf2314d-9efc-4089-a1b8-ff025691e0de" in namespace "downward-api-2407" to be "Succeeded or Failed"
    Sep  9 21:04:59.373: INFO: Pod "downwardapi-volume-cdf2314d-9efc-4089-a1b8-ff025691e0de": Phase="Pending", Reason="", readiness=false. Elapsed: 5.387368ms
    Sep  9 21:05:01.377: INFO: Pod "downwardapi-volume-cdf2314d-9efc-4089-a1b8-ff025691e0de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009508233s
    STEP: Saw pod success
    Sep  9 21:05:01.377: INFO: Pod "downwardapi-volume-cdf2314d-9efc-4089-a1b8-ff025691e0de" satisfied condition "Succeeded or Failed"
    Sep  9 21:05:01.380: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod downwardapi-volume-cdf2314d-9efc-4089-a1b8-ff025691e0de container client-container: <nil>
    STEP: delete the pod
    Sep  9 21:05:01.407: INFO: Waiting for pod downwardapi-volume-cdf2314d-9efc-4089-a1b8-ff025691e0de to disappear
    Sep  9 21:05:01.411: INFO: Pod downwardapi-volume-cdf2314d-9efc-4089-a1b8-ff025691e0de no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:01.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-2407" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":788,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:05:01.450: INFO: >>> kubeConfig: /tmp/kubeconfig
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:01.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-1206" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":45,"skipped":788,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:05:01.564: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-0d8101bc-d663-4987-a05e-d9919aff5abf
    STEP: Creating a pod to test consume configMaps
    Sep  9 21:05:01.628: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4fb193e7-5f22-49ea-b790-8ff7c6453578" in namespace "projected-5498" to be "Succeeded or Failed"
    Sep  9 21:05:01.634: INFO: Pod "pod-projected-configmaps-4fb193e7-5f22-49ea-b790-8ff7c6453578": Phase="Pending", Reason="", readiness=false. Elapsed: 5.663121ms
    Sep  9 21:05:03.640: INFO: Pod "pod-projected-configmaps-4fb193e7-5f22-49ea-b790-8ff7c6453578": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011419358s
    STEP: Saw pod success
    Sep  9 21:05:03.640: INFO: Pod "pod-projected-configmaps-4fb193e7-5f22-49ea-b790-8ff7c6453578" satisfied condition "Succeeded or Failed"
    Sep  9 21:05:03.644: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-projected-configmaps-4fb193e7-5f22-49ea-b790-8ff7c6453578 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 21:05:03.664: INFO: Waiting for pod pod-projected-configmaps-4fb193e7-5f22-49ea-b790-8ff7c6453578 to disappear
    Sep  9 21:05:03.666: INFO: Pod pod-projected-configmaps-4fb193e7-5f22-49ea-b790-8ff7c6453578 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:03.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5498" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":809,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
    [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:05.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-1264" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":47,"skipped":811,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 42 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:09.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-6574" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":979,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:11.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-9813" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":57,"skipped":981,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:13.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-5382" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1002,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:19.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-1970" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":48,"skipped":884,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
    [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
    STEP: Watching for error events or started pod
    STEP: Waiting for pod completion
    STEP: Checking that the pod succeeded
    STEP: Getting logs from the pod
    STEP: Checking that the sysctl is actually updated
    [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:21.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "sysctl-7350" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":49,"skipped":885,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:23.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-62" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":50,"skipped":890,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:24.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-9986" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":59,"skipped":1004,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:31.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-6336" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":60,"skipped":1019,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 28 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:37.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-554" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":901,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] RuntimeClass
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:37.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "runtimeclass-4709" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":-1,"completed":52,"skipped":907,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 45 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:52.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-4923" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":61,"skipped":1023,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Ingress API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:53.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingress-7388" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":62,"skipped":1035,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:53.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-2844" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":63,"skipped":1042,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 21:05:53.386: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2b3e3f8c-c7b4-43a1-9289-af17812fda34" in namespace "projected-1034" to be "Succeeded or Failed"
    Sep  9 21:05:53.390: INFO: Pod "downwardapi-volume-2b3e3f8c-c7b4-43a1-9289-af17812fda34": Phase="Pending", Reason="", readiness=false. Elapsed: 3.752634ms
    Sep  9 21:05:55.394: INFO: Pod "downwardapi-volume-2b3e3f8c-c7b4-43a1-9289-af17812fda34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008513394s
    STEP: Saw pod success
    Sep  9 21:05:55.394: INFO: Pod "downwardapi-volume-2b3e3f8c-c7b4-43a1-9289-af17812fda34" satisfied condition "Succeeded or Failed"
    Sep  9 21:05:55.399: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod downwardapi-volume-2b3e3f8c-c7b4-43a1-9289-af17812fda34 container client-container: <nil>
    STEP: delete the pod
    Sep  9 21:05:55.424: INFO: Waiting for pod downwardapi-volume-2b3e3f8c-c7b4-43a1-9289-af17812fda34 to disappear
    Sep  9 21:05:55.426: INFO: Pod downwardapi-volume-2b3e3f8c-c7b4-43a1-9289-af17812fda34 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:55.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-1034" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1055,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:05:55.456: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod UID as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  9 21:05:55.513: INFO: Waiting up to 5m0s for pod "downward-api-4e55f3f1-a7d6-4aa8-919c-1845375e6761" in namespace "downward-api-6958" to be "Succeeded or Failed"
    Sep  9 21:05:55.523: INFO: Pod "downward-api-4e55f3f1-a7d6-4aa8-919c-1845375e6761": Phase="Pending", Reason="", readiness=false. Elapsed: 10.39986ms
    Sep  9 21:05:57.528: INFO: Pod "downward-api-4e55f3f1-a7d6-4aa8-919c-1845375e6761": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015716645s
    STEP: Saw pod success
    Sep  9 21:05:57.529: INFO: Pod "downward-api-4e55f3f1-a7d6-4aa8-919c-1845375e6761" satisfied condition "Succeeded or Failed"
    Sep  9 21:05:57.532: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod downward-api-4e55f3f1-a7d6-4aa8-919c-1845375e6761 container dapi-container: <nil>
    STEP: delete the pod
    Sep  9 21:05:57.554: INFO: Waiting for pod downward-api-4e55f3f1-a7d6-4aa8-919c-1845375e6761 to disappear
    Sep  9 21:05:57.557: INFO: Pod downward-api-4e55f3f1-a7d6-4aa8-919c-1845375e6761 no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:05:57.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-6958" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":65,"skipped":1063,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:06:00.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-7683" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":909,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-node] KubeletManagedEtcHosts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:06:08.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "e2e-kubelet-etc-hosts-2118" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":912,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:06:08.097: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's args [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's args
    Sep  9 21:06:08.151: INFO: Waiting up to 5m0s for pod "var-expansion-521f2d87-8b6d-4d47-bad6-d9c4b4cd0d2f" in namespace "var-expansion-4034" to be "Succeeded or Failed"
    Sep  9 21:06:08.156: INFO: Pod "var-expansion-521f2d87-8b6d-4d47-bad6-d9c4b4cd0d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.268847ms
    Sep  9 21:06:10.163: INFO: Pod "var-expansion-521f2d87-8b6d-4d47-bad6-d9c4b4cd0d2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012172835s
    STEP: Saw pod success
    Sep  9 21:06:10.163: INFO: Pod "var-expansion-521f2d87-8b6d-4d47-bad6-d9c4b4cd0d2f" satisfied condition "Succeeded or Failed"
    Sep  9 21:06:10.168: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod var-expansion-521f2d87-8b6d-4d47-bad6-d9c4b4cd0d2f container dapi-container: <nil>
    STEP: delete the pod
    Sep  9 21:06:10.201: INFO: Waiting for pod var-expansion-521f2d87-8b6d-4d47-bad6-d9c4b4cd0d2f to disappear
    Sep  9 21:06:10.205: INFO: Pod var-expansion-521f2d87-8b6d-4d47-bad6-d9c4b4cd0d2f no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:06:10.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-4034" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":55,"skipped":944,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 29 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:06:13.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-6756" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":66,"skipped":1066,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
    STEP: Destroying namespace "webhook-334-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":67,"skipped":1069,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:06:17.354: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  9 21:06:17.407: INFO: Waiting up to 5m0s for pod "busybox-user-65534-1222cdd2-5728-4608-87be-bd53245e8a80" in namespace "security-context-test-4775" to be "Succeeded or Failed"
    Sep  9 21:06:17.411: INFO: Pod "busybox-user-65534-1222cdd2-5728-4608-87be-bd53245e8a80": Phase="Pending", Reason="", readiness=false. Elapsed: 3.981702ms
    Sep  9 21:06:19.417: INFO: Pod "busybox-user-65534-1222cdd2-5728-4608-87be-bd53245e8a80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009955232s
    Sep  9 21:06:19.417: INFO: Pod "busybox-user-65534-1222cdd2-5728-4608-87be-bd53245e8a80" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:06:19.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-4775" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":68,"skipped":1160,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 42 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:06:42.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-35" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":69,"skipped":1172,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":439,"failed":1,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:01:39.659: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
    
    STEP: creating a pod to probe /etc/hosts
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  9 21:05:18.553: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-8315.svc.cluster.local from pod dns-8315/dns-test-3bbb08d6-e867-478a-b558-74923a07a945: the server is currently unable to handle the request (get pods dns-test-3bbb08d6-e867-478a-b558-74923a07a945)
    Sep  9 21:06:43.830: FAIL: Unable to read wheezy_hosts@dns-querier-1 from pod dns-8315/dns-test-3bbb08d6-e867-478a-b558-74923a07a945: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-8315/pods/dns-test-3bbb08d6-e867-478a-b558-74923a07a945/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0031fdd68, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00600e2d0, 0xc0031fdd68, 0xc00600e2d0, 0xc0031fdd68)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc003bd5980, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0909 21:06:43.831619      16 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  9 21:06:43.830: Unable to read wheezy_hosts@dns-querier-1 from pod dns-8315/dns-test-3bbb08d6-e867-478a-b558-74923a07a945: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-8315/pods/dns-test-3bbb08d6-e867-478a-b558-74923a07a945/proxy/results/wheezy_hosts@dns-querier-1\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0031fdd68, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00600e2d0, 0xc0031fdd68, 0xc00600e2d0, 0xc0031fdd68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0031fdd68, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc001996500, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc002ac0c00, 0x77b8c18, 0xc003b97080, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000f19080, 0xc002ac0c00, 0xc001996500, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.4()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:127 +0x62a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc003bd5980)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc003bd5980)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc003bd5980, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.

    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc00324b740)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc00324b740)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0000ff680, 0x12f, 0x86a5e60, 0x7d, 0xd3, 0xc001b1c000, 0x7fc)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0000ff680, 0x12f, 0xc0031fd7a8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0000ff680, 0x12f, 0xc0031fd890, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc0031fdaf0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc00600e200, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0031fdd68, 0x29a3500, 0x0, 0x0)
... skipping 86 lines ...
    STEP: Destroying namespace "crd-webhook-4580" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":70,"skipped":1205,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:08:00.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-7136" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":56,"skipped":968,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
    • [SLOW TEST:316.098 seconds]
    [sig-apps] CronJob
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":65,"skipped":1138,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:08:00.397: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-map-165814bb-c167-485e-8f93-84b8d0d9cfa9
    STEP: Creating a pod to test consume configMaps
    Sep  9 21:08:00.445: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1f6b833a-fc1d-4872-9514-a472ea43df50" in namespace "projected-326" to be "Succeeded or Failed"
    Sep  9 21:08:00.450: INFO: Pod "pod-projected-configmaps-1f6b833a-fc1d-4872-9514-a472ea43df50": Phase="Pending", Reason="", readiness=false. Elapsed: 3.852715ms
    Sep  9 21:08:02.455: INFO: Pod "pod-projected-configmaps-1f6b833a-fc1d-4872-9514-a472ea43df50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008973639s
    STEP: Saw pod success
    Sep  9 21:08:02.455: INFO: Pod "pod-projected-configmaps-1f6b833a-fc1d-4872-9514-a472ea43df50" satisfied condition "Succeeded or Failed"
    Sep  9 21:08:02.460: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-projected-configmaps-1f6b833a-fc1d-4872-9514-a472ea43df50 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 21:08:02.491: INFO: Waiting for pod pod-projected-configmaps-1f6b833a-fc1d-4872-9514-a472ea43df50 to disappear
    Sep  9 21:08:02.495: INFO: Pod pod-projected-configmaps-1f6b833a-fc1d-4872-9514-a472ea43df50 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:08:02.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-326" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":981,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:08:02.558: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename svcaccounts
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  9 21:08:02.617: INFO: created pod
    Sep  9 21:08:02.617: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-8383" to be "Succeeded or Failed"
    Sep  9 21:08:02.621: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.378149ms
    Sep  9 21:08:04.627: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010171177s
    STEP: Saw pod success
    Sep  9 21:08:04.627: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
    Sep  9 21:08:34.628: INFO: polling logs
    Sep  9 21:08:34.645: INFO: Pod logs: 
    2022/09/09 21:08:03 OK: Got token
    2022/09/09 21:08:03 validating with in-cluster discovery
    2022/09/09 21:08:03 OK: got issuer https://kubernetes.default.svc.cluster.local
    2022/09/09 21:08:03 Full, not-validated claims: 
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:08:34.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-8383" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":58,"skipped":1006,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:08:53.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-2796" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":59,"skipped":1021,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 21:08:53.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f7440a6-665f-4266-8f7d-a9a37d23b64b" in namespace "projected-3199" to be "Succeeded or Failed"
    Sep  9 21:08:53.433: INFO: Pod "downwardapi-volume-4f7440a6-665f-4266-8f7d-a9a37d23b64b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058459ms
    Sep  9 21:08:55.438: INFO: Pod "downwardapi-volume-4f7440a6-665f-4266-8f7d-a9a37d23b64b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009064403s
    STEP: Saw pod success
    Sep  9 21:08:55.438: INFO: Pod "downwardapi-volume-4f7440a6-665f-4266-8f7d-a9a37d23b64b" satisfied condition "Succeeded or Failed"
    Sep  9 21:08:55.442: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod downwardapi-volume-4f7440a6-665f-4266-8f7d-a9a37d23b64b container client-container: <nil>
    STEP: delete the pod
    Sep  9 21:08:55.467: INFO: Waiting for pod downwardapi-volume-4f7440a6-665f-4266-8f7d-a9a37d23b64b to disappear
    Sep  9 21:08:55.471: INFO: Pod downwardapi-volume-4f7440a6-665f-4266-8f7d-a9a37d23b64b no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:08:55.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-3199" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1026,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    • [SLOW TEST:150.578 seconds]
    [sig-node] Probing container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
      should have monotonically increasing restart count [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":71,"skipped":1210,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:09:21.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-6607" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":72,"skipped":1243,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:09:21.661: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name projected-configmap-test-volume-a946059e-19be-46a1-9519-0d64d983a9a1
    STEP: Creating a pod to test consume configMaps
    Sep  9 21:09:21.711: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b52cd272-ca7d-46a3-b878-f2cf0cb1de17" in namespace "projected-7317" to be "Succeeded or Failed"
    Sep  9 21:09:21.715: INFO: Pod "pod-projected-configmaps-b52cd272-ca7d-46a3-b878-f2cf0cb1de17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151021ms
    Sep  9 21:09:23.720: INFO: Pod "pod-projected-configmaps-b52cd272-ca7d-46a3-b878-f2cf0cb1de17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009473728s
    STEP: Saw pod success
    Sep  9 21:09:23.720: INFO: Pod "pod-projected-configmaps-b52cd272-ca7d-46a3-b878-f2cf0cb1de17" satisfied condition "Succeeded or Failed"
    Sep  9 21:09:23.725: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth pod pod-projected-configmaps-b52cd272-ca7d-46a3-b878-f2cf0cb1de17 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 21:09:23.752: INFO: Waiting for pod pod-projected-configmaps-b52cd272-ca7d-46a3-b878-f2cf0cb1de17 to disappear
    Sep  9 21:09:23.756: INFO: Pod pod-projected-configmaps-b52cd272-ca7d-46a3-b878-f2cf0cb1de17 no longer exists
    [AfterEach] [sig-storage] Projected configMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:09:23.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7317" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":73,"skipped":1247,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:09:30.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-187" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":74,"skipped":1259,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide container's cpu limit [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 21:09:30.183: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e818b8f7-e4ad-4cec-843f-a6438def6d23" in namespace "downward-api-1275" to be "Succeeded or Failed"
    Sep  9 21:09:30.187: INFO: Pod "downwardapi-volume-e818b8f7-e4ad-4cec-843f-a6438def6d23": Phase="Pending", Reason="", readiness=false. Elapsed: 3.792962ms
    Sep  9 21:09:32.193: INFO: Pod "downwardapi-volume-e818b8f7-e4ad-4cec-843f-a6438def6d23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010707963s
    STEP: Saw pod success
    Sep  9 21:09:32.193: INFO: Pod "downwardapi-volume-e818b8f7-e4ad-4cec-843f-a6438def6d23" satisfied condition "Succeeded or Failed"
    Sep  9 21:09:32.197: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth pod downwardapi-volume-e818b8f7-e4ad-4cec-843f-a6438def6d23 container client-container: <nil>
    STEP: delete the pod
    Sep  9 21:09:32.218: INFO: Waiting for pod downwardapi-volume-e818b8f7-e4ad-4cec-843f-a6438def6d23 to disappear
    Sep  9 21:09:32.221: INFO: Pod downwardapi-volume-e818b8f7-e4ad-4cec-843f-a6438def6d23 no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:09:32.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-1275" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":75,"skipped":1291,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:09:36.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-5646" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":76,"skipped":1301,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-node] Lease
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:09:36.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "lease-test-9190" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":77,"skipped":1305,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:09:36.463: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir volume type on tmpfs
    Sep  9 21:09:36.504: INFO: Waiting up to 5m0s for pod "pod-86cd072b-3542-4a2a-bfca-b9e8db403b70" in namespace "emptydir-941" to be "Succeeded or Failed"
    Sep  9 21:09:36.507: INFO: Pod "pod-86cd072b-3542-4a2a-bfca-b9e8db403b70": Phase="Pending", Reason="", readiness=false. Elapsed: 3.158648ms
    Sep  9 21:09:38.514: INFO: Pod "pod-86cd072b-3542-4a2a-bfca-b9e8db403b70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009532371s
    STEP: Saw pod success
    Sep  9 21:09:38.514: INFO: Pod "pod-86cd072b-3542-4a2a-bfca-b9e8db403b70" satisfied condition "Succeeded or Failed"
    Sep  9 21:09:38.518: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-86cd072b-3542-4a2a-bfca-b9e8db403b70 container test-container: <nil>
    STEP: delete the pod
    Sep  9 21:09:38.546: INFO: Waiting for pod pod-86cd072b-3542-4a2a-bfca-b9e8db403b70 to disappear
    Sep  9 21:09:38.550: INFO: Pod pod-86cd072b-3542-4a2a-bfca-b9e8db403b70 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:09:38.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-941" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":78,"skipped":1329,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:09:48.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-6635" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":79,"skipped":1357,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 24 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:09:55.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-5022" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":80,"skipped":1359,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] PodTemplates
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:09:55.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "podtemplate-2893" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":81,"skipped":1382,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicationController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:09:58.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replication-controller-8973" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":82,"skipped":1427,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] CronJob
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 15 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:10:01.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "cronjob-999" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":61,"skipped":1038,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:10:03.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-9776" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":1044,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 3 lines ...
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
    [It] should serve a basic endpoint from pods  [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating service endpoint-test2 in namespace services-6978
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6978 to expose endpoints map[]
    Sep  9 21:10:03.769: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found
    Sep  9 21:10:04.779: INFO: successfully validated that service endpoint-test2 in namespace services-6978 exposes endpoints map[]
    STEP: Creating pod pod1 in namespace services-6978
    Sep  9 21:10:04.798: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
    Sep  9 21:10:06.803: INFO: The status of Pod pod1 is Running (Ready = true)
    STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6978 to expose endpoints map[pod1:[80]]
    Sep  9 21:10:06.822: INFO: successfully validated that service endpoint-test2 in namespace services-6978 exposes endpoints map[pod1:[80]]
... skipping 14 lines ...
    STEP: Destroying namespace "services-6978" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":63,"skipped":1058,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Runtime
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:10:11.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-runtime-2905" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1074,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 18 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:10:12.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4576" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":65,"skipped":1075,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:10:14.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-4669" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":66,"skipped":1105,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 4 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
    STEP: Setting up data
    [It] should support subpaths with downward pod [LinuxOnly] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating pod pod-subpath-test-downwardapi-4trd
    STEP: Creating a pod to test atomic-volume-subpath
    Sep  9 21:09:59.025: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-4trd" in namespace "subpath-9171" to be "Succeeded or Failed"
    Sep  9 21:09:59.029: INFO: Pod "pod-subpath-test-downwardapi-4trd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.741451ms
    Sep  9 21:10:01.034: INFO: Pod "pod-subpath-test-downwardapi-4trd": Phase="Running", Reason="", readiness=true. Elapsed: 2.009362526s
    Sep  9 21:10:03.040: INFO: Pod "pod-subpath-test-downwardapi-4trd": Phase="Running", Reason="", readiness=true. Elapsed: 4.015557503s
    Sep  9 21:10:05.045: INFO: Pod "pod-subpath-test-downwardapi-4trd": Phase="Running", Reason="", readiness=true. Elapsed: 6.019857978s
    Sep  9 21:10:07.050: INFO: Pod "pod-subpath-test-downwardapi-4trd": Phase="Running", Reason="", readiness=true. Elapsed: 8.024953163s
    Sep  9 21:10:09.061: INFO: Pod "pod-subpath-test-downwardapi-4trd": Phase="Running", Reason="", readiness=true. Elapsed: 10.036260427s
    Sep  9 21:10:11.067: INFO: Pod "pod-subpath-test-downwardapi-4trd": Phase="Running", Reason="", readiness=true. Elapsed: 12.04278076s
    Sep  9 21:10:13.073: INFO: Pod "pod-subpath-test-downwardapi-4trd": Phase="Running", Reason="", readiness=true. Elapsed: 14.048701305s
    Sep  9 21:10:15.079: INFO: Pod "pod-subpath-test-downwardapi-4trd": Phase="Running", Reason="", readiness=true. Elapsed: 16.054582511s
    Sep  9 21:10:17.088: INFO: Pod "pod-subpath-test-downwardapi-4trd": Phase="Running", Reason="", readiness=true. Elapsed: 18.063358086s
    Sep  9 21:10:19.095: INFO: Pod "pod-subpath-test-downwardapi-4trd": Phase="Running", Reason="", readiness=true. Elapsed: 20.070472s
    Sep  9 21:10:21.100: INFO: Pod "pod-subpath-test-downwardapi-4trd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.075464075s
    STEP: Saw pod success
    Sep  9 21:10:21.100: INFO: Pod "pod-subpath-test-downwardapi-4trd" satisfied condition "Succeeded or Failed"
    Sep  9 21:10:21.104: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-subpath-test-downwardapi-4trd container test-container-subpath-downwardapi-4trd: <nil>
    STEP: delete the pod
    Sep  9 21:10:21.123: INFO: Waiting for pod pod-subpath-test-downwardapi-4trd to disappear
    Sep  9 21:10:21.128: INFO: Pod pod-subpath-test-downwardapi-4trd no longer exists
    STEP: Deleting pod pod-subpath-test-downwardapi-4trd
    Sep  9 21:10:21.128: INFO: Deleting pod "pod-subpath-test-downwardapi-4trd" in namespace "subpath-9171"
    [AfterEach] [sig-storage] Subpath
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:10:21.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "subpath-9171" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":83,"skipped":1447,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:10:21.148: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-1602a341-7fb5-4106-9a90-72d8995f5591
    STEP: Creating a pod to test consume configMaps
    Sep  9 21:10:21.195: INFO: Waiting up to 5m0s for pod "pod-configmaps-038a9f2e-3c60-48e3-9f6a-63b1042129f4" in namespace "configmap-4693" to be "Succeeded or Failed"
    Sep  9 21:10:21.200: INFO: Pod "pod-configmaps-038a9f2e-3c60-48e3-9f6a-63b1042129f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.641816ms
    Sep  9 21:10:23.205: INFO: Pod "pod-configmaps-038a9f2e-3c60-48e3-9f6a-63b1042129f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009926478s
    STEP: Saw pod success
    Sep  9 21:10:23.205: INFO: Pod "pod-configmaps-038a9f2e-3c60-48e3-9f6a-63b1042129f4" satisfied condition "Succeeded or Failed"
    Sep  9 21:10:23.209: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-configmaps-038a9f2e-3c60-48e3-9f6a-63b1042129f4 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 21:10:23.228: INFO: Waiting for pod pod-configmaps-038a9f2e-3c60-48e3-9f6a-63b1042129f4 to disappear
    Sep  9 21:10:23.231: INFO: Pod pod-configmaps-038a9f2e-3c60-48e3-9f6a-63b1042129f4 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:10:23.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-4693" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":84,"skipped":1447,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "services-5629" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":67,"skipped":1106,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Docker Containers
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 6 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:10:35.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "containers-6558" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":68,"skipped":1122,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
    STEP: Destroying namespace "webhook-790-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":69,"skipped":1128,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide container's memory request [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 21:10:39.437: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a591cc53-2d04-4df1-b425-c510e51ae127" in namespace "projected-5766" to be "Succeeded or Failed"
    Sep  9 21:10:39.441: INFO: Pod "downwardapi-volume-a591cc53-2d04-4df1-b425-c510e51ae127": Phase="Pending", Reason="", readiness=false. Elapsed: 3.485921ms
    Sep  9 21:10:41.446: INFO: Pod "downwardapi-volume-a591cc53-2d04-4df1-b425-c510e51ae127": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008334509s
    STEP: Saw pod success
    Sep  9 21:10:41.446: INFO: Pod "downwardapi-volume-a591cc53-2d04-4df1-b425-c510e51ae127" satisfied condition "Succeeded or Failed"
    Sep  9 21:10:41.449: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth pod downwardapi-volume-a591cc53-2d04-4df1-b425-c510e51ae127 container client-container: <nil>
    STEP: delete the pod
    Sep  9 21:10:41.470: INFO: Waiting for pod downwardapi-volume-a591cc53-2d04-4df1-b425-c510e51ae127 to disappear
    Sep  9 21:10:41.474: INFO: Pod downwardapi-volume-a591cc53-2d04-4df1-b425-c510e51ae127 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:10:41.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-5766" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":70,"skipped":1133,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:10:41.487: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should provide podname only [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 21:10:41.529: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f3bfa1e-ec64-4d2e-b0fd-84977884ac34" in namespace "projected-7671" to be "Succeeded or Failed"
    Sep  9 21:10:41.532: INFO: Pod "downwardapi-volume-0f3bfa1e-ec64-4d2e-b0fd-84977884ac34": Phase="Pending", Reason="", readiness=false. Elapsed: 3.45545ms
    Sep  9 21:10:43.538: INFO: Pod "downwardapi-volume-0f3bfa1e-ec64-4d2e-b0fd-84977884ac34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009347268s
    STEP: Saw pod success
    Sep  9 21:10:43.538: INFO: Pod "downwardapi-volume-0f3bfa1e-ec64-4d2e-b0fd-84977884ac34" satisfied condition "Succeeded or Failed"
    Sep  9 21:10:43.542: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth pod downwardapi-volume-0f3bfa1e-ec64-4d2e-b0fd-84977884ac34 container client-container: <nil>
    STEP: delete the pod
    Sep  9 21:10:43.565: INFO: Waiting for pod downwardapi-volume-0f3bfa1e-ec64-4d2e-b0fd-84977884ac34 to disappear
    Sep  9 21:10:43.571: INFO: Pod downwardapi-volume-0f3bfa1e-ec64-4d2e-b0fd-84977884ac34 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:10:43.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-7671" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":71,"skipped":1133,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
    STEP: Destroying namespace "services-7070" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":72,"skipped":1149,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":439,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:06:43.862: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename dns
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
    
    STEP: creating a pod to probe /etc/hosts
    STEP: submitting the pod to kubernetes
    STEP: retrieving the pod
    STEP: looking for the results for each expected name from probers
    Sep  9 21:10:19.609: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-6339.svc.cluster.local from pod dns-6339/dns-test-31a1231b-a8f2-466b-b104-ea032e5b7385: the server is currently unable to handle the request (get pods dns-test-31a1231b-a8f2-466b-b104-ea032e5b7385)
    Sep  9 21:11:45.939: FAIL: Unable to read wheezy_hosts@dns-querier-1 from pod dns-6339/dns-test-31a1231b-a8f2-466b-b104-ea032e5b7385: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-6339/pods/dns-test-31a1231b-a8f2-466b-b104-ea032e5b7385/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded
    
    Full Stack Trace
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0031fdd68, 0x29a3500, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00600e4c8, 0xc0031fdd68, 0xc00600e4c8, 0xc0031fdd68)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
... skipping 13 lines ...
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
    testing.tRunner(0xc003bd5980, 0x70fea78)
    	/usr/local/go/src/testing/testing.go:1203 +0xe5
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1248 +0x2b3
    E0909 21:11:45.940199      16 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Sep  9 21:11:45.939: Unable to read wheezy_hosts@dns-querier-1 from pod dns-6339/dns-test-31a1231b-a8f2-466b-b104-ea032e5b7385: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-6339/pods/dns-test-31a1231b-a8f2-466b-b104-ea032e5b7385/proxy/results/wheezy_hosts@dns-querier-1\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0031fdd68, 0x29a3500, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00600e4c8, 0xc0031fdd68, 0xc00600e4c8, 0xc0031fdd68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0031fdd68, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc002750c80, 0x8, 0x8, 0x6ee63d3, 0x7, 0xc001618400, 0x77b8c18, 0xc0025c0840, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x158\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000f19080, 0xc001618400, 0xc002750c80, 0x8, 0x8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.4()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:127 +0x62a\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc003bd5980)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc003bd5980)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc003bd5980, 0x70fea78)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} (
    Your test failed.
    Ginkgo panics to prevent subsequent assertions from running.
    Normally Ginkgo rescues this panic so you shouldn't see it.
    
    But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
    To circumvent this, you should call
    
... skipping 5 lines ...
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6a84100, 0xc003a86100)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
    panic(0x6a84100, 0xc003a86100)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0000ff680, 0x12f, 0x86a5e60, 0x7d, 0xd3, 0xc00395e000, 0x7fc)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
    panic(0x61dbcc0, 0x75da840)
    	/usr/local/go/src/runtime/panic.go:965 +0x1b9
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0000ff680, 0x12f, 0xc0031fd7a8, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8
    k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0000ff680, 0x12f, 0xc0031fd890, 0x1, 0x1)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
    k8s.io/kubernetes/test/e2e/framework.Failf(0x6f89b47, 0x24, 0xc0031fdaf0, 0x4, 0x4)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
    k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0xc00600e400, 0x0, 0x0)
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
    k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0031fdd68, 0x29a3500, 0x0, 0x0)
... skipping 54 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  9 21:11:45.939: Unable to read wheezy_hosts@dns-querier-1 from pod dns-6339/dns-test-31a1231b-a8f2-466b-b104-ea032e5b7385: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-6339/pods/dns-test-31a1231b-a8f2-466b-b104-ea032e5b7385/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
    ------------------------------
    {"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":439,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:11:46.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-6108" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":445,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 19 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:11:46.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-watch-7058" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":73,"skipped":1175,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:11:46.974: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0644 on node default medium
    Sep  9 21:11:47.019: INFO: Waiting up to 5m0s for pod "pod-b2bb69bc-92b5-4ce5-b0f2-b8569e88dcd8" in namespace "emptydir-8229" to be "Succeeded or Failed"
    Sep  9 21:11:47.024: INFO: Pod "pod-b2bb69bc-92b5-4ce5-b0f2-b8569e88dcd8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.220301ms
    Sep  9 21:11:49.028: INFO: Pod "pod-b2bb69bc-92b5-4ce5-b0f2-b8569e88dcd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009477618s
    STEP: Saw pod success
    Sep  9 21:11:49.028: INFO: Pod "pod-b2bb69bc-92b5-4ce5-b0f2-b8569e88dcd8" satisfied condition "Succeeded or Failed"
    Sep  9 21:11:49.032: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-6rlx5y pod pod-b2bb69bc-92b5-4ce5-b0f2-b8569e88dcd8 container test-container: <nil>
    STEP: delete the pod
    Sep  9 21:11:49.062: INFO: Waiting for pod pod-b2bb69bc-92b5-4ce5-b0f2-b8569e88dcd8 to disappear
    Sep  9 21:11:49.065: INFO: Pod pod-b2bb69bc-92b5-4ce5-b0f2-b8569e88dcd8 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:11:49.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-8229" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":74,"skipped":1194,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
    STEP: Deploying the webhook pod
    STEP: Wait for the deployment to be ready
    Sep  9 21:11:49.741: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
    STEP: Deploying the webhook service
    STEP: Verifying the service has paired with the endpoint
    Sep  9 21:11:52.775: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should unconditionally reject operations on fail closed webhook [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
    STEP: create a namespace for the webhook
    STEP: create a configmap should be unconditionally rejected by the webhook
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:11:52.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-2511" for this suite.
    STEP: Destroying namespace "webhook-2511-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":75,"skipped":1203,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] version v1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 39 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:11:55.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "proxy-6984" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":76,"skipped":1245,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 101 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:11:55.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "statefulset-5117" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":85,"skipped":1454,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Aggregator
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:05.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "aggregator-8418" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":77,"skipped":1251,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:17.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-4071" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":78,"skipped":1258,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:17.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-6741" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":86,"skipped":1464,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] IngressClass API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:17.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "ingressclass-5909" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":87,"skipped":1474,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] ServiceAccounts
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:18.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "svcaccounts-5711" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":88,"skipped":1516,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 27 lines ...
    STEP: Destroying namespace "webhook-3083-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":79,"skipped":1337,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] Garbage collector
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:23.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "gc-2641" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":89,"skipped":1542,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:25.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-6286" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":80,"skipped":1357,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:25.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-6734" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":81,"skipped":1370,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:27.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-7902" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":90,"skipped":1571,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:28.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "certificates-5284" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":91,"skipped":1590,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:12:28.548: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail to create secret due to empty secret key [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name secret-emptykey-test-6c6dc9a9-7aa3-4f4a-b0a5-a98b9744ee07
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:28.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-5735" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":92,"skipped":1630,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Job
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:33.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "job-9487" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":32,"skipped":453,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:12:33.262: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename downward-api
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward api env vars
    Sep  9 21:12:33.305: INFO: Waiting up to 5m0s for pod "downward-api-e9dfbac5-cf7b-4fc0-9602-2983c0dbcd5b" in namespace "downward-api-8620" to be "Succeeded or Failed"
    Sep  9 21:12:33.310: INFO: Pod "downward-api-e9dfbac5-cf7b-4fc0-9602-2983c0dbcd5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.76283ms
    Sep  9 21:12:35.315: INFO: Pod "downward-api-e9dfbac5-cf7b-4fc0-9602-2983c0dbcd5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009637427s
    STEP: Saw pod success
    Sep  9 21:12:35.315: INFO: Pod "downward-api-e9dfbac5-cf7b-4fc0-9602-2983c0dbcd5b" satisfied condition "Succeeded or Failed"
    Sep  9 21:12:35.319: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod downward-api-e9dfbac5-cf7b-4fc0-9602-2983c0dbcd5b container dapi-container: <nil>
    STEP: delete the pod
    Sep  9 21:12:35.345: INFO: Waiting for pod downward-api-e9dfbac5-cf7b-4fc0-9602-2983c0dbcd5b to disappear
    Sep  9 21:12:35.348: INFO: Pod downward-api-e9dfbac5-cf7b-4fc0-9602-2983c0dbcd5b no longer exists
    [AfterEach] [sig-node] Downward API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:35.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-8620" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":460,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:45.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-6542" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":93,"skipped":1655,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 13 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:45.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-5959" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":94,"skipped":1664,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Networking
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 47 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:12:58.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pod-network-test-2218" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":527,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:12:45.929: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  9 21:12:47.990: INFO: Deleting pod "var-expansion-163070e7-91a8-4d5c-a32f-2389cda6e306" in namespace "var-expansion-5811"
    Sep  9 21:12:47.997: INFO: Wait up to 5m0s for pod "var-expansion-163070e7-91a8-4d5c-a32f-2389cda6e306" to be fully deleted
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:00.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-5811" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":95,"skipped":1687,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:12:58.063: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename secrets
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable via the environment [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: creating secret secrets-4251/secret-test-265f47e9-aed9-4411-977a-a1b433976b5d
    STEP: Creating a pod to test consume secrets
    Sep  9 21:12:58.114: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b4f8dda-cf1c-4b9d-8f10-fbd6645d194f" in namespace "secrets-4251" to be "Succeeded or Failed"
    Sep  9 21:12:58.118: INFO: Pod "pod-configmaps-5b4f8dda-cf1c-4b9d-8f10-fbd6645d194f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.853915ms
    Sep  9 21:13:00.122: INFO: Pod "pod-configmaps-5b4f8dda-cf1c-4b9d-8f10-fbd6645d194f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007967135s
    STEP: Saw pod success
    Sep  9 21:13:00.122: INFO: Pod "pod-configmaps-5b4f8dda-cf1c-4b9d-8f10-fbd6645d194f" satisfied condition "Succeeded or Failed"
    Sep  9 21:13:00.126: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth pod pod-configmaps-5b4f8dda-cf1c-4b9d-8f10-fbd6645d194f container env-test: <nil>
    STEP: delete the pod
    Sep  9 21:13:00.154: INFO: Waiting for pod pod-configmaps-5b4f8dda-cf1c-4b9d-8f10-fbd6645d194f to disappear
    Sep  9 21:13:00.159: INFO: Pod pod-configmaps-5b4f8dda-cf1c-4b9d-8f10-fbd6645d194f no longer exists
    [AfterEach] [sig-node] Secrets
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:00.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "secrets-4251" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":536,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 48 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:04.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-7640" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":36,"skipped":543,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] Deployment
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 26 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:09.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "deployment-8743" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":37,"skipped":557,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:13:09.515: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename var-expansion
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should allow substituting values in a container's command [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test substitution in container's command
    Sep  9 21:13:09.566: INFO: Waiting up to 5m0s for pod "var-expansion-081530ca-1f29-4208-a771-6cd20799dfc9" in namespace "var-expansion-6149" to be "Succeeded or Failed"
    Sep  9 21:13:09.570: INFO: Pod "var-expansion-081530ca-1f29-4208-a771-6cd20799dfc9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.809574ms
    Sep  9 21:13:11.575: INFO: Pod "var-expansion-081530ca-1f29-4208-a771-6cd20799dfc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008475854s
    STEP: Saw pod success
    Sep  9 21:13:11.575: INFO: Pod "var-expansion-081530ca-1f29-4208-a771-6cd20799dfc9" satisfied condition "Succeeded or Failed"
    Sep  9 21:13:11.578: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-6rlx5y pod var-expansion-081530ca-1f29-4208-a771-6cd20799dfc9 container dapi-container: <nil>
    STEP: delete the pod
    Sep  9 21:13:11.596: INFO: Waiting for pod var-expansion-081530ca-1f29-4208-a771-6cd20799dfc9 to disappear
    Sep  9 21:13:11.598: INFO: Pod var-expansion-081530ca-1f29-4208-a771-6cd20799dfc9 no longer exists
    [AfterEach] [sig-node] Variable Expansion
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:11.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "var-expansion-6149" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":572,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Kubelet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 10 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:13.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubelet-test-5085" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":577,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] ResourceQuota
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:16.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "resourcequota-1654" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":96,"skipped":1708,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-instrumentation] Events API
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 21 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:16.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "events-8461" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":97,"skipped":1726,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:13:16.425: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  9 21:13:16.476: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f0536e38-1d6a-47d0-9853-1acb89b70bb8" in namespace "security-context-test-4394" to be "Succeeded or Failed"
    Sep  9 21:13:16.480: INFO: Pod "busybox-readonly-false-f0536e38-1d6a-47d0-9853-1acb89b70bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.821632ms
    Sep  9 21:13:18.486: INFO: Pod "busybox-readonly-false-f0536e38-1d6a-47d0-9853-1acb89b70bb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009513657s
    Sep  9 21:13:18.486: INFO: Pod "busybox-readonly-false-f0536e38-1d6a-47d0-9853-1acb89b70bb8" satisfied condition "Succeeded or Failed"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:18.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-4394" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":98,"skipped":1748,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-apps] ReplicaSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:18.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "replicaset-6487" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":40,"skipped":590,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 16 lines ...
    STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7619-crds.webhook.example.com via the AdmissionRegistration API
    Sep  9 21:12:39.631: INFO: Waiting for webhook configuration to be ready...
    Sep  9 21:12:49.744: INFO: Waiting for webhook configuration to be ready...
    Sep  9 21:12:59.846: INFO: Waiting for webhook configuration to be ready...
    Sep  9 21:13:09.945: INFO: Waiting for webhook configuration to be ready...
    Sep  9 21:13:19.957: INFO: Waiting for webhook configuration to be ready...
    Sep  9 21:13:19.957: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should mutate custom resource [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  9 21:13:19.957: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
... skipping 6 lines ...
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-2386ef99-644c-4132-803b-4bcd8283feaa
    STEP: Creating a pod to test consume configMaps
    Sep  9 21:13:18.952: INFO: Waiting up to 5m0s for pod "pod-configmaps-714b7018-e27a-4539-92ed-b90755d365bc" in namespace "configmap-2148" to be "Succeeded or Failed"
    Sep  9 21:13:18.956: INFO: Pod "pod-configmaps-714b7018-e27a-4539-92ed-b90755d365bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.530074ms
    Sep  9 21:13:20.962: INFO: Pod "pod-configmaps-714b7018-e27a-4539-92ed-b90755d365bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010292595s
    STEP: Saw pod success
    Sep  9 21:13:20.962: INFO: Pod "pod-configmaps-714b7018-e27a-4539-92ed-b90755d365bc" satisfied condition "Succeeded or Failed"
    Sep  9 21:13:20.966: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-configmaps-714b7018-e27a-4539-92ed-b90755d365bc container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 21:13:20.997: INFO: Waiting for pod pod-configmaps-714b7018-e27a-4539-92ed-b90755d365bc to disappear
    Sep  9 21:13:21.003: INFO: Pod pod-configmaps-714b7018-e27a-4539-92ed-b90755d365bc no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:21.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-2148" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":607,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
    [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 21:13:21.186: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a1d37f1-5ab1-42b2-b6db-b1068b59acaa" in namespace "downward-api-7251" to be "Succeeded or Failed"
    Sep  9 21:13:21.193: INFO: Pod "downwardapi-volume-1a1d37f1-5ab1-42b2-b6db-b1068b59acaa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115584ms
    Sep  9 21:13:23.200: INFO: Pod "downwardapi-volume-1a1d37f1-5ab1-42b2-b6db-b1068b59acaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013659185s
    STEP: Saw pod success
    Sep  9 21:13:23.200: INFO: Pod "downwardapi-volume-1a1d37f1-5ab1-42b2-b6db-b1068b59acaa" satisfied condition "Succeeded or Failed"
    Sep  9 21:13:23.204: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod downwardapi-volume-1a1d37f1-5ab1-42b2-b6db-b1068b59acaa container client-container: <nil>
    STEP: delete the pod
    Sep  9 21:13:23.228: INFO: Waiting for pod downwardapi-volume-1a1d37f1-5ab1-42b2-b6db-b1068b59acaa to disappear
    Sep  9 21:13:23.240: INFO: Pod downwardapi-volume-1a1d37f1-5ab1-42b2-b6db-b1068b59acaa no longer exists
    [AfterEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:23.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-7251" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":645,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:25.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-4362" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":99,"skipped":1786,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:13:25.253: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename emptydir
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test emptydir 0777 on node default medium
    Sep  9 21:13:25.301: INFO: Waiting up to 5m0s for pod "pod-f9a1dc6e-c529-496b-9d44-faf63f906cc7" in namespace "emptydir-4032" to be "Succeeded or Failed"
    Sep  9 21:13:25.306: INFO: Pod "pod-f9a1dc6e-c529-496b-9d44-faf63f906cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.070471ms
    Sep  9 21:13:27.310: INFO: Pod "pod-f9a1dc6e-c529-496b-9d44-faf63f906cc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009066153s
    STEP: Saw pod success
    Sep  9 21:13:27.310: INFO: Pod "pod-f9a1dc6e-c529-496b-9d44-faf63f906cc7" satisfied condition "Succeeded or Failed"
    Sep  9 21:13:27.313: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7 pod pod-f9a1dc6e-c529-496b-9d44-faf63f906cc7 container test-container: <nil>
    STEP: delete the pod
    Sep  9 21:13:27.331: INFO: Waiting for pod pod-f9a1dc6e-c529-496b-9d44-faf63f906cc7 to disappear
    Sep  9 21:13:27.334: INFO: Pod pod-f9a1dc6e-c529-496b-9d44-faf63f906cc7 no longer exists
    [AfterEach] [sig-storage] EmptyDir volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:27.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-4032" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":100,"skipped":1796,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-network] DNS
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:27.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "dns-5333" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":43,"skipped":659,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 39 lines ...
    STEP: Destroying namespace "services-2251" for this suite.
    [AfterEach] [sig-network] Services
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":44,"skipped":683,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:43.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-6489" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":101,"skipped":1802,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:13:43.493: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename configmap
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating configMap with name configmap-test-volume-cc10820c-5207-46d5-99ad-60dd280533e2
    STEP: Creating a pod to test consume configMaps
    Sep  9 21:13:43.566: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a860087-7905-41fd-9133-1ee45804a5b3" in namespace "configmap-2112" to be "Succeeded or Failed"
    Sep  9 21:13:43.570: INFO: Pod "pod-configmaps-3a860087-7905-41fd-9133-1ee45804a5b3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.99914ms
    Sep  9 21:13:45.575: INFO: Pod "pod-configmaps-3a860087-7905-41fd-9133-1ee45804a5b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00871202s
    STEP: Saw pod success
    Sep  9 21:13:45.575: INFO: Pod "pod-configmaps-3a860087-7905-41fd-9133-1ee45804a5b3" satisfied condition "Succeeded or Failed"
    Sep  9 21:13:45.579: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjvth pod pod-configmaps-3a860087-7905-41fd-9133-1ee45804a5b3 container agnhost-container: <nil>
    STEP: delete the pod
    Sep  9 21:13:45.595: INFO: Waiting for pod pod-configmaps-3a860087-7905-41fd-9133-1ee45804a5b3 to disappear
    Sep  9 21:13:45.598: INFO: Pod pod-configmaps-3a860087-7905-41fd-9133-1ee45804a5b3 no longer exists
    [AfterEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:45.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-2112" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":102,"skipped":1820,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    STEP: Destroying namespace "webhook-8223-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":45,"skipped":691,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 82 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:53.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-8836" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":103,"skipped":1830,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSSSS
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":705,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    [BeforeEach] [sig-node] Pods
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:13:52.541: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename pods
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:54.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "pods-5872" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":705,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-network] EndpointSlice
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 8 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:54.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "endpointslice-7505" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":48,"skipped":726,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] EmptyDir wrapper volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:13:55.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "emptydir-wrapper-7813" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":104,"skipped":1835,"failed":1,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}

    
    SSS
    ------------------------------
    [BeforeEach] [sig-apps] DisruptionController
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:14:01.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "disruption-5948" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":49,"skipped":758,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSS
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":81,"skipped":1394,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:13:20.567: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
    STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2120-crds.webhook.example.com via the AdmissionRegistration API
    Sep  9 21:13:34.670: INFO: Waiting for webhook configuration to be ready...
    Sep  9 21:13:44.782: INFO: Waiting for webhook configuration to be ready...
    Sep  9 21:13:54.893: INFO: Waiting for webhook configuration to be ready...
    Sep  9 21:14:04.983: INFO: Waiting for webhook configuration to be ready...
    Sep  9 21:14:14.996: INFO: Waiting for webhook configuration to be ready...
    Sep  9 21:14:14.996: FAIL: waiting for webhook configuration to be ready
    Unexpected error:
        <*errors.errorString | 0xc000244290>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred
    
... skipping 23 lines ...
    [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
      should mutate custom resource [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  9 21:14:14.996: waiting for webhook configuration to be ready
      Unexpected error:
          <*errors.errorString | 0xc000244290>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred
    
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
    ------------------------------
    {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":81,"skipped":1394,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:14:15.587: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename webhook
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
    STEP: Destroying namespace "webhook-6426-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":82,"skipped":1394,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSS
    ------------------------------
    [BeforeEach] [sig-node] Probing container
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 14 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:14:23.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-probe-1227" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":763,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:14:22.586: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename projected
    STEP: Waiting for a default service account to be provisioned in namespace
    [It] should be consumable from pods in volume [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating projection with secret that has name projected-secret-test-f69a8df0-4429-4de9-aba2-40fac1cce1e0
    STEP: Creating a pod to test consume secrets
    Sep  9 21:14:22.672: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f535c50b-32a0-4ee2-ad16-192bfb2eabcf" in namespace "projected-6120" to be "Succeeded or Failed"
    Sep  9 21:14:22.678: INFO: Pod "pod-projected-secrets-f535c50b-32a0-4ee2-ad16-192bfb2eabcf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.609063ms
    Sep  9 21:14:24.683: INFO: Pod "pod-projected-secrets-f535c50b-32a0-4ee2-ad16-192bfb2eabcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010874888s
    STEP: Saw pod success
    Sep  9 21:14:24.683: INFO: Pod "pod-projected-secrets-f535c50b-32a0-4ee2-ad16-192bfb2eabcf" satisfied condition "Succeeded or Failed"
    Sep  9 21:14:24.687: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-advsih pod pod-projected-secrets-f535c50b-32a0-4ee2-ad16-192bfb2eabcf container projected-secret-volume-test: <nil>
    STEP: delete the pod
    Sep  9 21:14:24.707: INFO: Waiting for pod pod-projected-secrets-f535c50b-32a0-4ee2-ad16-192bfb2eabcf to disappear
    Sep  9 21:14:24.711: INFO: Pod pod-projected-secrets-f535c50b-32a0-4ee2-ad16-192bfb2eabcf no longer exists
    [AfterEach] [sig-storage] Projected secret
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:14:24.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-6120" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":83,"skipped":1399,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 52 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:14:30.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-619" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":51,"skipped":771,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:14:30.977: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename security-context-test
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
    [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    Sep  9 21:14:31.032: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-2c3cf2ec-59ee-4e70-bd6a-98e25f644e48" in namespace "security-context-test-8102" to be "Succeeded or Failed"
    Sep  9 21:14:31.035: INFO: Pod "busybox-privileged-false-2c3cf2ec-59ee-4e70-bd6a-98e25f644e48": Phase="Pending", Reason="", readiness=false. Elapsed: 3.732868ms
    Sep  9 21:14:33.041: INFO: Pod "busybox-privileged-false-2c3cf2ec-59ee-4e70-bd6a-98e25f644e48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008984728s
    Sep  9 21:14:33.041: INFO: Pod "busybox-privileged-false-2c3cf2ec-59ee-4e70-bd6a-98e25f644e48" satisfied condition "Succeeded or Failed"
    Sep  9 21:14:33.047: INFO: Got logs for pod "busybox-privileged-false-2c3cf2ec-59ee-4e70-bd6a-98e25f644e48": "ip: RTNETLINK answers: Operation not permitted\n"
    [AfterEach] [sig-node] Security Context
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:14:33.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "security-context-test-8102" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":800,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
    STEP: Destroying namespace "crd-webhook-3806" for this suite.
    [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":84,"skipped":1420,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 33 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:14:40.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-8679" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":85,"skipped":1424,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Downward API volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:14:47.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "downward-api-484" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":86,"skipped":1430,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
    Sep  9 21:14:36.961: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
    [It] should honor timeout [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Setting timeout (1s) shorter than webhook latency (5s)
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
    STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is longer than webhook latency
    STEP: Registering slow webhook via the AdmissionRegistration API
    STEP: Having no error when timeout is empty (defaulted to 10s in v1)
    STEP: Registering slow webhook via the AdmissionRegistration API
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:14:49.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "webhook-1834" for this suite.
    STEP: Destroying namespace "webhook-1834-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":53,"skipped":801,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 7 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:14:50.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "custom-resource-definition-6194" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":87,"skipped":1436,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    SSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 23 lines ...
    STEP: Destroying namespace "webhook-7011-markers" for this suite.
    [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":88,"skipped":1448,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    
    S
    ------------------------------
    [BeforeEach] [sig-node] Container Lifecycle Hook
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 22 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:14:57.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "container-lifecycle-hook-3734" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":827,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSS
    ------------------------------
    [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:15:12.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "crd-publish-openapi-3655" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":55,"skipped":833,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 25 lines ...
    Sep  9 21:08:08.311: INFO: stderr: ""
    Sep  9 21:08:08.311: INFO: stdout: "true"
    Sep  9 21:08:08.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6623 get pods update-demo-nautilus-rznwh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep  9 21:08:08.424: INFO: stderr: ""
    Sep  9 21:08:08.424: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep  9 21:08:08.424: INFO: validating pod update-demo-nautilus-rznwh
    Sep  9 21:11:41.530: INFO: update-demo-nautilus-rznwh is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-rznwh)
    Sep  9 21:11:46.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6623 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
    Sep  9 21:11:46.654: INFO: stderr: ""
    Sep  9 21:11:46.654: INFO: stdout: "update-demo-nautilus-rznwh update-demo-nautilus-tfxk8 "
    Sep  9 21:11:46.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6623 get pods update-demo-nautilus-rznwh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
    Sep  9 21:11:46.779: INFO: stderr: ""
    Sep  9 21:11:46.779: INFO: stdout: "true"
    Sep  9 21:11:46.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6623 get pods update-demo-nautilus-rznwh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
    Sep  9 21:11:46.901: INFO: stderr: ""
    Sep  9 21:11:46.901: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
    Sep  9 21:11:46.901: INFO: validating pod update-demo-nautilus-rznwh
    Sep  9 21:15:20.669: INFO: update-demo-nautilus-rznwh is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-rznwh)
    Sep  9 21:15:25.670: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
    
    Full Stack Trace
    k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3()
    	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 +0x2ad
    k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003acf980)
    	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
... skipping 57 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:15:28.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-5194" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":56,"skipped":854,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] ConfigMap
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:15:32.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "configmap-7803" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":928,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSS
    ------------------------------
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 2 lines ...
    STEP: Waiting for a default service account to be provisioned in namespace
    [BeforeEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
    [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    STEP: Creating a pod to test downward API volume plugin
    Sep  9 21:15:32.942: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc6fc318-c90a-4b36-808e-dfbdefdd5a99" in namespace "projected-9396" to be "Succeeded or Failed"
    Sep  9 21:15:32.946: INFO: Pod "downwardapi-volume-fc6fc318-c90a-4b36-808e-dfbdefdd5a99": Phase="Pending", Reason="", readiness=false. Elapsed: 3.966837ms
    Sep  9 21:15:34.952: INFO: Pod "downwardapi-volume-fc6fc318-c90a-4b36-808e-dfbdefdd5a99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009333649s
    STEP: Saw pod success
    Sep  9 21:15:34.952: INFO: Pod "downwardapi-volume-fc6fc318-c90a-4b36-808e-dfbdefdd5a99" satisfied condition "Succeeded or Failed"
    Sep  9 21:15:34.955: INFO: Trying to get logs from node k8s-upgrade-and-conformance-b2vx3j-worker-6rlx5y pod downwardapi-volume-fc6fc318-c90a-4b36-808e-dfbdefdd5a99 container client-container: <nil>
    STEP: delete the pod
    Sep  9 21:15:34.982: INFO: Waiting for pod downwardapi-volume-fc6fc318-c90a-4b36-808e-dfbdefdd5a99 to disappear
    Sep  9 21:15:34.987: INFO: Pod downwardapi-volume-fc6fc318-c90a-4b36-808e-dfbdefdd5a99 no longer exists
    [AfterEach] [sig-storage] Projected downwardAPI
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:15:34.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "projected-9396" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":936,"failed":3,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}

    
    SSSSSSSSSSSSSSSSSSSSS
    ------------------------------
    {"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":65,"skipped":1195,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    [BeforeEach] [sig-cli] Kubectl client
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
    Sep  9 21:15:26.073: INFO: >>> kubeConfig: /tmp/kubeconfig
    STEP: Building a namespace api object, basename kubectl
    STEP: Waiting for a default service account to be provisioned in namespace
... skipping 143 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
    Sep  9 21:15:51.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
    STEP: Destroying namespace "kubectl-5179" for this suite.
    
    •
    ------------------------------
    {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":66,"skipped":1195,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

    
    SSSSSSSSSSSSSS
    ------------------------------
    Sep  9 21:15:51.345: INFO: Running AfterSuite actions on all nodes
    
    
... skipping 12 lines ...
    STEP: creating replication controller affinity-nodeport-transition in namespace services-3708
    I0909 21:13:55.709703      22 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-3708, replica count: 3
    I0909 21:13:58.760510      22 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
    Sep  9 21:13:58.775: INFO: Creating new exec pod
    Sep  9 21:14:01.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:04.088: INFO: rc: 1
    Sep  9 21:14:04.088: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:05.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:07.303: INFO: rc: 1
    Sep  9 21:14:07.303: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:08.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:10.318: INFO: rc: 1
    Sep  9 21:14:10.319: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:11.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:13.303: INFO: rc: 1
    Sep  9 21:14:13.303: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:14.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:16.271: INFO: rc: 1
    Sep  9 21:14:16.271: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:17.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:19.287: INFO: rc: 1
    Sep  9 21:14:19.287: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:20.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:22.290: INFO: rc: 1
    Sep  9 21:14:22.290: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:23.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:25.299: INFO: rc: 1
    Sep  9 21:14:25.299: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:26.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:28.299: INFO: rc: 1
    Sep  9 21:14:28.299: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:29.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:31.268: INFO: rc: 1
    Sep  9 21:14:31.268: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:32.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:34.290: INFO: rc: 1
    Sep  9 21:14:34.290: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:35.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:37.306: INFO: rc: 1
    Sep  9 21:14:37.306: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:38.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:40.307: INFO: rc: 1
    Sep  9 21:14:40.307: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:41.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:43.316: INFO: rc: 1
    Sep  9 21:14:43.316: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:44.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:46.286: INFO: rc: 1
    Sep  9 21:14:46.286: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:47.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:49.301: INFO: rc: 1
    Sep  9 21:14:49.301: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:50.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:52.276: INFO: rc: 1
    Sep  9 21:14:52.277: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:53.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:55.302: INFO: rc: 1
    Sep  9 21:14:55.302: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + nc -v -t -w 2 affinity-nodeport-transition 80
    + echo hostName
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:56.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:14:58.284: INFO: rc: 1
    Sep  9 21:14:58.284: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:14:59.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:01.363: INFO: rc: 1
    Sep  9 21:15:01.363: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:02.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:04.296: INFO: rc: 1
    Sep  9 21:15:04.296: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:05.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:07.307: INFO: rc: 1
    Sep  9 21:15:07.307: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:08.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:10.319: INFO: rc: 1
    Sep  9 21:15:10.319: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:11.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:13.309: INFO: rc: 1
    Sep  9 21:15:13.309: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:14.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:16.274: INFO: rc: 1
    Sep  9 21:15:16.274: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:17.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:19.290: INFO: rc: 1
    Sep  9 21:15:19.290: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:20.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:22.291: INFO: rc: 1
    Sep  9 21:15:22.291: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:23.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:25.287: INFO: rc: 1
    Sep  9 21:15:25.288: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:26.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:28.321: INFO: rc: 1
    Sep  9 21:15:28.321: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:29.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:31.271: INFO: rc: 1
    Sep  9 21:15:31.271: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:32.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:34.282: INFO: rc: 1
    Sep  9 21:15:34.282: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:35.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:37.419: INFO: rc: 1
    Sep  9 21:15:37.420: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + nc -v -t -w 2 affinity-nodeport-transition 80
    + echo hostName
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:38.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:40.336: INFO: rc: 1
    Sep  9 21:15:40.336: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:41.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:43.307: INFO: rc: 1
    Sep  9 21:15:43.307: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:44.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:46.280: INFO: rc: 1
    Sep  9 21:15:46.280: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:47.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:49.289: INFO: rc: 1
    Sep  9 21:15:49.289: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:50.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:52.285: INFO: rc: 1
    Sep  9 21:15:52.285: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:53.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:55.297: INFO: rc: 1
    Sep  9 21:15:55.297: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:56.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:15:58.299: INFO: rc: 1
    Sep  9 21:15:58.299: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:15:59.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:16:01.322: INFO: rc: 1
    Sep  9 21:16:01.322: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:16:02.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:16:04.300: INFO: rc: 1
    Sep  9 21:16:04.300: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:16:04.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
    Sep  9 21:16:06.490: INFO: rc: 1
    Sep  9 21:16:06.490: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3708 exec execpod-affinityrq767 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:

    Command stdout:
    
    stderr:
    + echo hostName
    + nc -v -t -w 2 affinity-nodeport-transition 80
    nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
    command terminated with exit code 1
    
    error:

    exit status 1
    Retrying...
    Sep  9 21:16:06.490: FAIL: Unexpected error:

        <*errors.errorString | 0xc003f74370>: {
            s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol",
        }
        service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol
    occurred
    
... skipping 27 lines ...
    • Failure [141.590 seconds]
    [sig-network] Services
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
      should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    
      Sep  9 21:16:06.490: Unexpected error:

          <*errors.errorString | 0xc003f74370>: {
              s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol",
          }
          service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol
      occurred
    
... skipping 56 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
      Basic StatefulSet functionality [StatefulSetBasic]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
        should perform canary updates and phased rolling updates of template modifications [Conformance]
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
    ------------------------------
    {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":89,"skipped":1449,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

    Sep  9 21:16:54.946: INFO: Running AfterSuite actions on all nodes
    
    
    [BeforeEach] [sig-apps] StatefulSet
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
    STEP: Creating a kubernetes client
... skipping 131 lines ...
    Sep  9 21:16:35.703: INFO: ss-2  k8s-upgrade-and-conformance-b2vx3j-md-0-zmp84-769c6df4b-xjfr7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 21:15:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 21:16:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-09-09 21:16:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-09-09 21:15:55 +0000 UTC  }]
    Sep  9 21:16:35.703: INFO: 
    Sep  9 21:16:35.703: INFO: StatefulSet ss has not reached scale 0, at 2
    STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7884
    Sep  9 21:16:36.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:16:36.866: INFO: rc: 1
    Sep  9 21:16:36.867: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    error: unable to upgrade connection: container not found ("webserver")

    
    error:

    exit status 1
    Sep  9 21:16:46.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:16:46.969: INFO: rc: 1
    Sep  9 21:16:46.970: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:16:56.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:16:57.063: INFO: rc: 1
    Sep  9 21:16:57.063: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:17:07.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:17:07.161: INFO: rc: 1
    Sep  9 21:17:07.161: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:17:17.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:17:17.260: INFO: rc: 1
    Sep  9 21:17:17.260: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:17:27.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:17:27.356: INFO: rc: 1
    Sep  9 21:17:27.356: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:17:37.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:17:37.455: INFO: rc: 1
    Sep  9 21:17:37.455: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:17:47.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:17:47.550: INFO: rc: 1
    Sep  9 21:17:47.550: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:17:57.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:17:57.662: INFO: rc: 1
    Sep  9 21:17:57.662: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:18:07.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:18:07.754: INFO: rc: 1
    Sep  9 21:18:07.754: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:18:17.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:18:17.855: INFO: rc: 1
    Sep  9 21:18:17.855: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:18:27.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:18:27.968: INFO: rc: 1
    Sep  9 21:18:27.968: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:18:37.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:18:38.068: INFO: rc: 1
    Sep  9 21:18:38.068: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:18:48.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:18:48.170: INFO: rc: 1
    Sep  9 21:18:48.170: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:18:58.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:18:58.277: INFO: rc: 1
    Sep  9 21:18:58.277: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:19:08.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:19:08.382: INFO: rc: 1
    Sep  9 21:19:08.382: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:19:18.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:19:18.481: INFO: rc: 1
    Sep  9 21:19:18.481: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:19:28.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:19:28.581: INFO: rc: 1
    Sep  9 21:19:28.582: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:19:38.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:19:38.739: INFO: rc: 1
    Sep  9 21:19:38.739: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:19:48.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:19:48.863: INFO: rc: 1
    Sep  9 21:19:48.864: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:19:58.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:19:59.252: INFO: rc: 1
    Sep  9 21:19:59.252: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:20:09.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:20:09.357: INFO: rc: 1
    Sep  9 21:20:09.357: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:20:19.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:20:19.455: INFO: rc: 1
    Sep  9 21:20:19.455: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:20:29.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:20:29.550: INFO: rc: 1
    Sep  9 21:20:29.550: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:20:39.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:20:39.657: INFO: rc: 1
    Sep  9 21:20:39.657: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:20:49.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:20:49.764: INFO: rc: 1
    Sep  9 21:20:49.764: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:20:59.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:20:59.867: INFO: rc: 1
    Sep  9 21:20:59.868: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:21:09.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:21:09.975: INFO: rc: 1
    Sep  9 21:21:09.975: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:21:19.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:21:20.076: INFO: rc: 1
    Sep  9 21:21:20.076: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:21:30.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:21:30.173: INFO: rc: 1
    Sep  9 21:21:30.173: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:

    Command stdout:
    
    stderr:
    Error from server (NotFound): pods "ss-1" not found

    
    error:

    exit status 1
    Sep  9 21:21:40.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-7884 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    Sep  9 21:21:40.276: INFO: rc: 1
    Sep  9 21:21:40.276: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
    Sep  9 21:21:40.276: INFO: Scaling statefulset ss to 0
    Sep  9 21:21:40.296: INFO: Waiting for statefulset status.replicas updated to 0
... skipping 597 lines ...
  [INTERRUPTED] When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] [ClusterClass] [It] Should create and upgrade a workload cluster and eventually run kubetest
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:118
  [INTERRUPTED] [SynchronizedAfterSuite] 
  /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/e2e_suite_test.go:169

Ran 1 of 21 Specs in 3544.006 seconds
FAIL! - Interrupted by Other Ginkgo Process -- 0 Passed | 1 Failed | 0 Pending | 20 Skipped


Ginkgo ran 1 suite in 1h0m14.934444928s

Test Suite Failed
make: *** [Makefile:129: run] Error 1
make: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e'
+ cleanup
++ pgrep -f 'docker events'
+ kill 25602
++ pgrep -f 'ctr -n moby events'
+ kill 25603
... skipping 23 lines ...